TL;DR: In a wide-ranging Invest Like The Best episode, legendary energy trader and philanthropist John Arnold shares explosive insights from his recent China trip — witnessing unmatched manufacturing speed, EV factories built in 17 months, and a robotics explosion with over 100 companies. He breaks down how he built the ultimate “seat” in natural gas trading (from Enron to Centaurus), the massive AI data center power surge reshaping US energy markets, why permitting reform and fighting NIMBYism are critical, the long-term promise of nuclear, geothermal, and solar+battery tech, plus his systems-thinking approach to reforming criminal justice, healthcare, education, and journalism. This is essential listening for investors, policymakers, and anyone tracking AI, China competition, and American infrastructure.
Key Takeaways from the John Arnold Podcast
China has leapfrogged the West in manufacturing speed and scale thanks to a highly educated, entrepreneurial workforce, domestic supply chains within 200 miles, and government subsidies fueling intense robotics and EV competition.
NIO’s EV factory went from groundbreaking to first car in just 17 months with heavy robotics automation — compared with US auto plants that average 40 years old (some over 100) — proving China’s ability to build quality products at unbeatable prices.
Over 100 robotics companies in China compete fiercely under five-year plans; winners get supported while losers face consolidation, creating better technology but overcapacity challenges.
John Arnold’s trading edge came from building the “best seat” in the industry: massive risk capital, top talent, proprietary data systems, 3-and-35 fees, and market-making dominance in natural gas futures and basis trades.
His entrepreneurial spark started with high-school baseball card arbitrage in the late 80s/early 90s — spotting geographic price differences on bulletin boards and scaling nationally before age 17.
US energy system goals: affordable, reliable, clean electrons with energy security and good jobs — but politics shift every 4-8 years while infrastructure takes decades to build.
AI data centers are the biggest new demand driver in decades — hyperscalers prioritize speed over price, and demand visibility extends at least through 2030, creating enormous opportunities in energy assets.
NIMBYism and outdated permitting are the #1 bottleneck; China builds without these delays, threatening US competitiveness unless federal reform passes this year.
Inter-regional transmission lines are a win-win-win solution (lower costs, higher reliability, fewer emissions) but remain nearly impossible to permit and build in the US.
Nuclear (traditional AP-1000 and new SMRs/fusion) is promising for clean baseload but extremely expensive and 10-15 years away at scale; advanced geothermal is the most exciting near-term play for data centers.
Solar panel costs keep falling (manufacturing is concentrated in China), but total delivered PPA prices are 50%+ higher than 2020 lows due to land, labor, transmission, and capital costs; robotics and co-located data centers are key innovations.
Foundations should intentionally get weaker over time because institutions become bureaucratic and risk-averse; true philanthropy takes risks on long-term systems change that governments and markets avoid.
Criminal justice reform should focus on increasing probability of getting caught (via tech, cameras, drones) rather than harsher penalties; cash bail is outdated — pretrial decisions should prioritize public safety and court appearance.
Healthcare is riddled with market failures (asymmetric information, third-party payers, regulatory gaming like skin-substitute loopholes) requiring heavy regulation that private actors constantly exploit.
EdTech and AI in education have shown almost no real-world outcome improvements despite 20 years of hype; the human teacher-student connection remains irreplaceable for engagement.
Journalism is the “fourth estate” and deserves philanthropic support for local investigative work that commercial models no longer sustain.
Detailed Summary of the John Arnold Interview on Invest Like The Best
John Arnold’s Eye-Opening Trip to China: Speed, Scale & Robotics Revolution
Arnold spent a week touring factories and meeting executives across China. His biggest takeaway: the country has transformed in 30 years from copying the West to leapfrogging it. Key advantages include a highly educated, entrepreneurial population, rapid capital deployment, deep domestic market, and hyper-flexible skilled labor that can scale by the thousands overnight. Supply chains are incredibly tight — one battery executive noted every supplier is within 200 miles and reachable same-day.
The NIO EV factory visit was the standout: from first shovel to first car rolling off the line in just 17 months. The plant uses extensive robotics while still employing workers, producing premium EVs ($40k-$80k range) plus a new model under $10k. Contrast this with US auto plants averaging 40 years old (some over 100). China now has over 100 EV manufacturers and over 100 robotics companies, fueled by provincial subsidies tied to five-year strategic plans. Intense competition drives innovation but also overcapacity — China is now shifting to support “winners” for global dominance.
Flights between the US and China are down 70%, Western expats in Shanghai down 50-75%, and American students down 90%. Arnold sensed growing Chinese confidence: “We used to copy the West. Now we will teach the West.”
How John Arnold Became the World’s Greatest Energy Trader: Discipline, Data & the “Best Seat”
Starting at Enron in 1995 at age 21, Arnold built Centaurus Advisors after the 2001 collapse. He describes total immersion: 12-hour trading days, industry nights, and constant mental replay. His edge wasn’t just talent — it was engineering the ultimate industry “seat”: massive retained earnings from early success allowed 3-and-35 fees, top talent pay, proprietary data systems, custom trade-entry platforms, and the ability to market-make in natural gas futures at Henry Hub plus basis trades across regions.
His teenage baseball card arbitrage business (late 80s/early 90s) taught him the same lessons: spotting geographic price discrepancies on early internet bulletin boards, scaling nationally, and knowing every product’s real-time value. This directly translated to knowing every natural gas month’s fair value better than anyone else.
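The basis trades mentioned above can be sketched with a toy calculation (the prices and volume below are hypothetical illustrations, not figures from the episode): a basis position is long one regional hub and short Henry Hub, so outright gas-price moves cancel and only the regional spread matters.

```python
def basis_pnl(entry_basis: float, exit_basis: float, mmbtu: float) -> float:
    """P&L of a long-basis position (long regional hub, short Henry Hub).

    Basis = regional price - Henry Hub price, in $/MMBtu. Because the
    position is long one leg and short the other, a move in the outright
    gas price cancels out; only the change in the spread is captured.
    """
    return (exit_basis - entry_basis) * mmbtu


# Hypothetical: buy a regional basis at -$0.25/MMBtu; it tightens to -$0.10
# on a 10,000 MMBtu position -> 0.15 * 10,000 = $1,500 profit.
profit = basis_pnl(-0.25, -0.10, 10_000)
```

Market-making this spread across many regions and delivery months is where knowing "every natural gas month's fair value" compounds into an edge.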
The State of US Energy Markets & the AI Data Center Tsunami
Arnold outlines five core goals for the US energy system: affordable, reliable, clean, secure, and job-creating electrons. America has abundant resources (oil, gas, wind, solar) and innovation capacity, but politics reset priorities every election cycle while infrastructure takes decades.
The wildcard: AI data centers. Hyperscalers (the most profitable companies ever) are pouring capital in with zero price sensitivity and maximum speed urgency. Demand visibility is clear through 2030 and will create massive opportunities for new energy technologies.
The biggest threat to US competitiveness is our inability to build. NIMBY (“Not In My Backyard”) opposition, endless lawsuits, and multi-veto-point permitting have stalled projects for 10+ years. China faces none of this friction. Arnold started a transmission company five years ago because inter-regional lines deliver lower costs, higher reliability, lower emissions, and more jobs — yet private capital has largely given up. He remains optimistic about bipartisan federal permitting reform passing in 2026.
Nuclear, Geothermal, Solar, Batteries & the Path to Energy Abundance
Traditional nuclear (Vogtle AP-1000) proved we can build safe plants — but at enormous cost and labor intensity (peak 9,000 skilled workers). Small modular reactors (SMRs) and fusion are promising but 10-15 years from scale and must compete on pure economics. Advanced geothermal stands out as the most exciting near-term baseload solution leveraging existing oil-and-gas talent.
Solar panels get cheaper every year (China-dominated manufacturing), but total delivered cost has risen 50%+ since 2020 lows due to land, labor, transmission, and higher capital costs. Battery prices are also now rising with lithium. Robotics for solar construction and co-locating data centers with generation are critical innovations.
Housing Reform Parallels & the Broader Permitting Crisis
Arnold draws direct parallels between energy and housing: YIMBY (“Yes In My Backyard”) movements in California, Austin, Montana, and the Northeast show bipartisan recognition that affordability is now a voter priority. Politicians face short-term pressure for subsidies instead of long-term deregulation — the same trap energy faces.
John Arnold’s Revolutionary Approach to Philanthropy & Systems Reform
Arnold’s foundation deliberately aims to become less powerful over time because institutions grow bureaucratic and risk-averse. Philanthropy’s unique role: take political and economic risks on long-term systems change that markets and governments avoid.
In criminal justice, the focus is on raising the probability of getting caught rather than on penalty severity. Reforms in New Jersey and Kentucky replaced cash bail with risk-based pretrial decisions. Tech (cameras, drones, real-time crime centers) offers solutions but raises privacy trade-offs, ones that wealthy communities have already accepted.
Healthcare is plagued by asymmetric information and regulatory arbitrage (e.g., constant new skin-substitute products to reset pricing windows). Education outcomes have declined despite massive EdTech investment — AI + AR/VR may finally crack engagement, but 20 years of hype have delivered zero aggregate gains. Journalism, the “fourth estate,” needs philanthropic funding for local investigative work that commercial models abandoned.
Final Thoughts: Why John Arnold’s Interview Matters in 2026
John Arnold’s conversation is a masterclass in systems thinking applied across trading, energy, China geopolitics, and American reform. His China trip should serve as a wake-up call: while the US debates, China executes at unprecedented speed and scale. The AI data center boom gives America a once-in-a-generation chance to rebuild energy infrastructure — but only if we fix permitting, transmission, and NIMBY gridlock now.
For investors, the signals are clear: advanced geothermal, solar robotics, and transmission technologies are poised for explosive growth. For policymakers, the message is urgent — energy abundance is national security in the AI age. And for philanthropists and operators everywhere, Arnold’s insistence that institutions must weaken to stay effective is a profound leadership lesson.
If you’re tracking AI infrastructure, US-China competition, or systems-level change, this is one of the highest-signal conversations of 2026.
Andrej Karpathy Autoresearch is the breakout open-source project of March 2026. Released just days ago, this ~630-line minimalist framework lets AI coding agents (Claude, GPT-4o, Gemini, etc.) autonomously run real LLM pretraining research experiments on a single GPU while you sleep. It’s already producing real improvements that transfer to bigger models and landing new leaderboard entries.
If you’re searching for the best “Karpathy Autoresearch tutorial”, “how to setup autoresearch”, or “AI agents LLM experiments overnight”, this is the most detailed, up-to-date guide on the internet.
TL;DR – Karpathy Autoresearch in 60 Seconds
Karpathy Autoresearch is a single-GPU agent-driven system where:
You write the high-level research goal in program.md
The AI agent only edits train.py
Every experiment runs for exactly 5 wall-clock minutes
Better val_bpb score? Keep the git commit. Worse? Auto-revert
~100 experiments per night → real breakthroughs while you sleep
What Is Andrej Karpathy Autoresearch and Why Everyone Is Talking About It
In his viral X post, Karpathy quipped that frontier AI research “used to be done by meat computers” and that “that era is long gone.”
Autoresearch takes his famous nanochat training core, strips it to a single file, and hands the entire research loop to AI agents. The human only updates the strategy in program.md. The agent experiments with architecture, optimizers, attention variants, RoPE, batch sizes — everything — and keeps only improvements via git.
Real results: Improvements discovered on depth-12 models already transfer to depth-24 and have landed nanochat a new spot on the “time to GPT-2” leaderboard after ~650 experiments.
How Autoresearch Actually Works (Step-by-Step)
The system is deliberately minimal so agents can understand and modify it instantly:
prepare.py (DO NOT TOUCH) – dataset + BPE tokenizer.
program.md – your research instructions and constraints.
train.py – full GPT model + Muon+AdamW optimizer + training loop. This is the ONLY file the agent edits.
Karpathy Autoresearch Setup Guides – NVIDIA, Mac & Windows
1. Official NVIDIA Setup (5 Minutes)
curl -LsSf https://astral.sh/uv/install.sh | sh
git clone https://github.com/karpathy/autoresearch.git
cd autoresearch
uv sync
uv run prepare.py
uv run train.py
Then paste the repo into Claude/GPT-4o and say: “Have a look at program.md and let’s kick off a new experiment!”
Humans edit only program.md; agents do everything else
Fixed 5-minute runs = fair comparisons
val_bpb metric is reliable and vocab-size independent
Real transferable gains proven (depth-12 → depth-24)
Mac, Windows, and NVIDIA versions all live today
Community already reporting 19%+ overnight gains
This is the seed of swarm-style AI research labs
Detailed Summary of Andrej Karpathy Autoresearch
Released March 7-8, 2026, Autoresearch is a minimum viable version of fully autonomous LLM research. Human strategy → AI agent execution → git-tracked progress. The entire system is designed to be forked and scaled into a massively collaborative platform where thousands of agents contribute “tiny papers” across branches.
Karpathy’s Vision: From Solo Agents to Swarm Research
In his follow-up X post Karpathy explained the bigger idea: asynchronously massively collaborative agent research (SETI@home style). The current code is just the seed — the real future is agents forking, discussing via GitHub, and accumulating discoveries across thousands of branches.
From the explosive announcement post to Mac/Windows forks appearing in 48 hours and real leaderboard improvements already confirmed, this project feels like the moment everything changed. Anyone with a GPU now has a full AI research team working overnight for almost zero cost. The human’s only job is writing the research strategy prompt. The era of manual experimentation is ending — and it’s ending fast. Download the repo tonight, read Karpathy’s original X posts for context, write your first program.md, and wake up to new discoveries.
Ready to Start? Clone the repo, run the three commands, and let your agents take over. Bookmark this page — new forks and agent-generated papers will drop daily. Drop your overnight results in the comments!
Naval Ravikant sat down with Eric Jorgenson (author of The Almanack of Naval Ravikant) for a 4+ hour megasode on the Smart Friends podcast — his most comprehensive public conversation in years. Five years after the original Almanack, Naval updates and expands his thinking across five pillars: building wealth, building judgment, learning happiness, saving yourself, and philosophy. The biggest shifts? He now leans heavily on David Deutsch’s definition of wealth as “the set of physical transformations you can effect,” sees AI as the ultimate leverage tool (not a replacement for human judgment), and has moved past chasing happiness toward pursuing truth, love, and beauty. He’s working on a new stealth company, has met roughly a dozen people he considers genuinely enlightened, and believes the most important formula for life is: stay healthy, get wealthy, seek truth, give love, and create beauty.
Key Takeaways
On Wealth
Deutsch’s definition is deeper than “assets that earn while you sleep.” Naval now defines wealth as the set of physical transformations you can effect — and the biggest driver of that capability is knowledge, not capital. If you removed Elon Musk from SpaceX, the wealth doesn’t just transfer. It disappears. The value is in the knowledge, not in the factory.
Knowledge is the real multiplier. Ten modern humans can change more than ten paleolithic humans — not because of capital, but because of accumulated knowledge. As a society gains knowledge, it becomes wealthier. As an individual gains knowledge, they become wealthier. This is why Marx was fundamentally wrong: value is not in the capital. It’s in the people doing things.
Ethical wealth creation is not only possible — it’s the norm in free markets. The common critiques of capitalism target cronyism, money printing, and government favoritism. None of that is free market capitalism. Real capitalism is a minimum structured set of rules that channels competitive energy into creating property instead of fighting over it.
This is the greatest period for wealth creation in human history. More knowledge, more capital, more leverage than ever before. If you’re moderately intelligent, not afraid of hard work, and flexible, you can do extremely well. But it takes 10 to 30 years. There are no get-rich-quick schemes.
AI is the ultimate leverage tool, not a replacement. Software engineers aren’t being replaced by AI — AI is letting software engineers replace everybody else. The people saying “programming is dead” are completely wrong. The most leveraged engineers are the ones building AI systems, then the ones using them. AI is great when wrong answers are okay. For anything requiring creativity or judgment at the edge, you still need humans.
Good products are hard to vary. Drawing from David Deutsch’s epistemology, Naval argues that the best products — like the iPhone — are like good scientific explanations: you can’t change the details without breaking them. They encapsulate deep knowledge, have surprising reach into applications the creators never imagined, and exhibit winner-take-all network effects.
On Judgment
Judgment is the most valuable thing in the age of infinite leverage. The difference between a CEO who’s right 80% of the time and one who’s right 85% of the time is worth billions of dollars when you’re steering a multi-trillion dollar ship. Direction matters more than any other single thing.
Judgment evolves into taste. First you reason through decisions logically. Then your subconscious enters into it (judgment). Then your whole body reacts to it (taste). The Rick Rubins and Steve Jobs of the world operate at the level of taste — they can’t fully explain why something is right, they just know. Naval says his investing is now “almost entirely taste.”
It takes time to develop your gut, but once it’s developed, don’t listen to anything else. This applies to people, investments, products, and life decisions. Older people have very good judgment about other people because human interaction is the one area where everyone is constantly gaining experience.
Learn from specific to general, not general to specific. This is Seneca’s insight: encounter reality, test it, learn from it, then generalize. Going the other way creates what Nassim Taleb calls “intellectual yet idiot” — someone overeducated and underpracticed. If you want to be a philosopher king, first be a king.
Hard work is non-negotiable, but it shouldn’t feel like work. The most productive people work intensely on problems that fascinate them. The biggest breakthroughs come during deep immersion — 24-36 hour sessions where you can’t put the problem down. But if it feels like forced drudgery, you’ll lose to someone who finds it genuinely enjoyable.
AI doesn’t have judgment. It has incredible information retrieval — the ability to cross-correlate all human knowledge and return the conventional correct answer. But for creative problems, novel situations, or anything requiring values and binding principles, AI falls short. It raises the tide for everyone, but there’s no “alpha” in the AI answer because everyone gets the same one.
On Happiness
Naval’s latest thinking: he’s not sure happiness exists. Happiness is a construct of the mind, a thought claiming to be a state. When the thought disappears, there’s no “you” there to be happy or unhappy. His focus has shifted from pursuing happiness to cultivating peace — being okay with things as they are, with few and consciously chosen desires.
The three big ones are wealth, health, and happiness — pursued in that order, but their importance is reversed. Naturally happy people have the greatest gift and don’t need the others. Health matters more than wealth (a sick man only wants one thing). But most people will pursue them wealth-first simply because of energy, flexibility, and the practical reality of financial obligations when young.
The more you think about yourself, the less happy you’ll be. Depressed people ruminate on themselves. Having motives larger than yourself — your mission, your children, your contribution — makes setbacks hurt less because they’re not personal. This is why Naval says: live for something larger than yourself, but only on your own terms.
Chronic unhappiness is an ego trip. Acute unhappiness is real and useful — it’s a signal. But chronic unhappiness is wanting to feel more “you,” more separate, more important. Identity creates motivated reasoning. The thinner your identity, the more clearly you can see reality.
The modern devil is cheap dopamine. Every deadly sin is a form of cheap dopamine. The direct pursuit of pleasure causes addiction and dopamine burnout. Virtues are the opposite — long-term individually beneficial behaviors that also create win-win outcomes for society. All virtues can be reinterpreted as long-term selfishness.
Meditation isn’t about enlightenment — it’s about self-observation. When you’re more self-aware, you catch your mind doing things that aren’t in your long-term interest. You can reset, question whether a desire matters, and choose whether to reinterpret a situation or address the underlying problem.
You don’t store memories — you store interpretations of memories. Changing those interpretations is what forgiveness actually is. Psychedelics, meditation, and honest introspection all work partly because they allow you to reprocess and reframe past experiences.
On Saving Yourself
Nobody is coming to save you. An ideal life is designed, not inherited. Naval claims his life is “really good” — at any given time he’s doing what he wants, nothing is obligatory, and if something stops being enjoyable, he changes it very quickly. This requires ruthless honesty about relationships, obligations, and what you actually want.
Every relationship is transactional — and that’s okay. Naval draws a hard line against false obligations. He doesn’t attend obligatory events, weddings, or ritualistic celebrations. The result: he’s left with people who are similarly free, low-ego, and voluntarily present. Nobody takes each other for granted.
The secret to a happy relationship is two happy people. You can’t be happy with your spouse if you’re not happy alone. Happiness is personal and must be tackled individually. Putting relationships ahead of your own inner work gets you neither.
God, kids, or mission — find at least one. Naval has all three. His “God” is personal and unarticulated. Family is irreplaceable (expand your definition as you age). And mission means actively building — right now that’s a stealth company and this kind of conversation.
Explore widely, then invest deeply. Modern society has made exploration easy, but all the benefits come from compound interest. You don’t learn through 10,000 hours — you learn through 10,000 honest iterations. Do, reflect, change, try again. Once your judgment tells you what fits, stop exploring and start compounding.
The only true test of intelligence is whether you get what you want out of life. This is a two-part test: choosing what to want (the harder part) and then getting it. If you pass that test, there’s nothing to be envious of. Choose inspiration over envy — find the part of someone else’s success that resonates with something inside you.
On Philosophy
Naval’s philosophical foundation: evolution + Buddhism + Deutsch. Evolution explains humans. Buddhism is the most time-tested internal philosophy. David Deutsch’s epistemology — good explanations that are hard to vary, conjecture and criticism — provides the best framework for understanding progress in science, business, and society.
Truth is a crystal in the multiverse. In the many-worlds interpretation, true knowledge replicates across more universes because it works. False knowledge is infinitely variable but gets eliminated. The “Rickiest of the Ricks” (from Rick and Morty) is the most truth-oriented version — lowest ego, least motivated reasoning, operating from the most universal principles.
Enlightenment is binary, not a path. Naval has met about a dozen people he considers genuinely enlightened. They share one trait: persistent experience of “no self.” Nothing bothers them — not cancer diagnoses, not personal failures. It’s not that they lack desire or capability. They’re often more effective, not less. But they don’t take anything personally.
The self is just a thought. When you look for the self — really look — you can never pin it down. It’s like a burning stick whirled in a circle that appears to be a flaming wheel. Just thoughts convincing you there’s someone there. Enlightened people have seen through this and their default state is pure awareness.
The real truths are heresies. There’s a 2×2 matrix of truth vs. spreadability: conventional wisdom (true and spreads), fake news (false and spreads), nonsense (false and doesn’t spread), and heresies (true but don’t spread). Heresies don’t spread because any truth that lowers group cohesion gets suppressed. This is why the greatest philosophers are read long after their deaths — they told harsh truths while alive that society wasn’t ready to hear.
Read the best 100 books over and over. Naval reads authors, not books. He reads philosophers, not authors. He’ll consume everything by Schopenhauer, Deutsch, Osho, Taleb, Krishnamurti — and until he’s finished everything by one thinker, he won’t move to the next. He judges philosophers by the outcomes they achieved in their own lives. A philosophy that led its creator to misery is suspect.
Simulation theory is just modern religion. Every era maps its dominant technology onto religion — the sun god, the god-king, the mechanical universe, and now the computational universe. Naval finds understanding relativity, quantum physics, and cosmology more satisfying than saying “the universe is a computer.” He maps Buddhism onto simulation theory (the white room in the Matrix = pure consciousness = enlightenment) but considers sim theory unfalsifiable and reductive.
Detailed Summary
Part 1: Building Wealth (0:00 – 37:49)
The conversation opens with Naval updating his definition of wealth through David Deutsch’s lens. Where he originally defined wealth as “assets that earn while you sleep” — a practical definition aimed at escaping the 9-to-5 trap — he now sees wealth more expansively as the set of physical transformations you can effect. This reframes wealth from a passive accumulation game to an active capability powered primarily by knowledge.
Naval makes a forceful case that knowledge, not capital, is the real wealth multiplier. He uses SpaceX as his central example: remove Elon Musk and the wealth doesn’t just redistribute — it evaporates, because the knowledge that makes SpaceX valuable disappears with the people who hold it. This is why Marxism fundamentally fails. The value isn’t in the factories. You can’t slice it up and redistribute it like gold.
He addresses the ethics of capitalism head-on, acknowledging that the majority of economic activity involves people fighting over existing wealth rather than creating new wealth (he draws an analogy to nature, where parasitic species outnumber standalone ones six to one). But he argues that free market capitalism, at its core, is the system that channels competitive energy into creation rather than destruction. The critiques of capitalism — bank bailouts, cronyism, government favoritism — target corruption of the system, not the system itself.
On AI and leverage, Naval makes what may be his most quotable claim: “AI is not going to replace software engineers — AI is going to let software engineers replace everybody else.” He sees AI as an incredible information retrieval and calculation tool that raises the floor for everyone, but provides no lasting competitive edge because everyone has access to the same answers. The real edge comes from judgment, creativity, and taste — the things AI cannot provide.
He connects Deutsch’s concept of “good explanations” to product building. Good products, like good scientific theories, are hard to vary — you can’t change the details without breaking them. The iPhone’s original form factor is still essentially unchanged because they nailed it. He notes that all technology has winner-take-all dynamics, and the best products amortize their development costs over the largest user base, making it impossible for any amount of money to buy a better alternative.
Part 2: Building Judgment (37:49 – 1:12:30)
Naval describes judgment as the single most important capability in an age of infinite leverage. He traces its development from conscious logical reasoning through subconscious intuition to full-body taste — the stage where you simply know what’s right without being able to articulate why.
He quotes John Cleese on creative problem-solving: “You simply have to let your mind rest against the problem in a friendly, persistent way.” This captures Naval’s view that breakthroughs require both intense focus and a relaxed, non-forcing attitude. He shares his own experience writing a compiler in college, where his most productive sessions were 24-36 hour marathons because it took hours just to reload the problem into his head after time away.
The section includes an important distinction between AI’s capabilities and human judgment. AI can cross-correlate all human knowledge and deliver the conventional correct answer for solved problems. But it lacks values, binding principles, and the ability to handle novel situations with idiosyncratic context. Naval sees AI as “magic” that looks like intelligence because of its staggering information retrieval, but it operates as a one-size-fits-all system trained on textbooks and data labelers’ opinions.
He emphasizes learning from specific to general (Seneca’s principle), warns against academic over-education without practice (Taleb’s “intellectual yet idiot”), and shares how he now reads less but more deliberately — using reading to spark his own thinking rather than absorbing others’ ideas for regurgitation. He singles out Schopenhauer as a writer where every sentence is crafted and you get something different from the same essay on every re-read.
Part 3: Learning Happiness (1:12:30 – 2:15:17)
This is the most philosophical section, where Naval significantly updates his earlier thinking. He admits he’s “not sure happiness exists” as a distinct state, framing it instead as a thought that claims to be a state. When the thought disappears, there’s no observer left to be happy or unhappy. This is deeply Buddhist — the no-self doctrine applied to emotional states.
His practical advice centers on cultivating peace rather than chasing happiness. He wants few, consciously chosen desires. He wants to act for reasons larger than himself (which paradoxically makes failure hurt less). And he wants to create space for authentic joy rather than ritualistic obligation.
Naval introduces his framework of “truth, love, and beauty” as what remains after health and wealth are handled. Truth is pursued because even uncomfortable truths make life better (he uses The Matrix’s Neo vs. Cipher as his central illustration). Love is best experienced as giving rather than receiving — falling in love with someone or something is the high, not being loved. Beauty is creation — the highest human art form and what separates his view from pure Buddhist quietism.
He discusses William Glasser’s choice theory at length, presenting the controversial view that depression often originates as a series of childhood behavioral choices that became unconscious habits. While acknowledging chemical components, he argues the explanation must be offered at the same level as the question — and that changing your brain through honest self-examination is more sustainable than long-term pharmaceutical intervention.
The section on meditation is refreshingly honest: the first 20 minutes your mind goes berserk, then it calms, and most of the benefit comes from simply acknowledging emotions rather than solving them. He describes a personal experience of extreme unhappiness where a part of him was simultaneously watching and recognizing “there’s nothing actually here — you’re creating a drama to feel important.”
Part 4: Saving Yourself (2:15:17 – 2:50:17)
Naval gets deeply personal about how he’s designed his life. He claims to have “an amazing life” where at any given time he’s doing exactly what he wants. Nothing is obligatory. Every relationship is voluntary. He maintains zero estranged family members while refusing to attend weddings, obligatory events, or ritualistic celebrations.
His stance on relationships is uncompromising: every relationship is transactional (providing mutual value), and pretending otherwise creates false obligations that breed resentment. He refuses to train his children to say “thank you” on command — if they feel genuine gratitude, it will emerge naturally. He believes the only real relationships are peer relationships, even employer-employee ones.
The exploration-vs-investment framework is one of the most actionable parts of the conversation. Modern society has made exploration easy (you can fly anywhere, enter any career, date infinitely), but all benefits come from compound interest — which requires commitment. The key transition is recognizing when to stop exploring and start investing. Naval argues that learning happens through honest iterations (do, reflect, change, repeat), not hours logged.
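The compounding argument above can be made concrete with a toy calculation. This sketch is illustrative only: the 10% yearly growth rate and the two-year switching cadence are arbitrary assumptions of mine, not figures from the episode.

```python
# Illustrative only: compare "keep exploring" (resetting progress each switch)
# with "commit and compound" over the same number of yearly iterations.

def explore(years, switch_every=2, rate=0.10):
    """Restart from 1.0 every time you switch tracks; gains don't carry over."""
    value = 1.0
    for year in range(years):
        if year % switch_every == 0:
            value = 1.0  # switching resets accumulated advantage
        value *= 1 + rate
    return value

def commit(years, rate=0.10):
    """Stay on one track: classic compound growth, (1 + r)^n."""
    return (1 + rate) ** years

print(round(explore(20), 2))  # → 1.21: perpetual exploring never accumulates
print(round(commit(20), 2))   # → 6.73: twenty committed iterations compound
```

The point of the sketch is Naval's: the benefit is not in the hours logged but in iterations whose gains carry forward into the next one.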
He names his sources of meaning: a personal relationship with “whatever this is” (God, loosely), his children and family, and his current stealth company. He explicitly says he doesn’t feel qualified to write a book about enlightenment because he hasn’t fully explored it himself — and he’s partly just lazy.
Part 5: Philosophy (2:50:17 – End)
The final section weaves together Naval’s philosophical commitments: evolution, Buddhism, and David Deutsch’s epistemology. He frames truth as “a crystal in the multiverse” — in the many-worlds interpretation, truth replicates because it works, while falsehood is infinitely variable but gets eliminated through skin-in-the-game dynamics.
His account of enlightened people is fascinating and specific. He’s met about a dozen, verified to his own satisfaction through sustained observation (watching them encounter genuinely bad events without perturbation). They include well-known names like Rupert Spira, Mooji, and Sadhguru, plus personal friends and lesser-known figures. The key trait: a persistent experience of no self. It’s binary — not a gradient. They’re often more capable, not less. More authentic desires, less mimetic behavior, less ego-driven.
He maps Buddhism onto simulation theory in an extended riff: breaking out of the Matrix is the quest for enlightenment, the white room is pure consciousness, and the boredom of the white room explains why consciousness generates infinite forms (why God forgets himself and goes back into the game). But he ultimately considers simulation theory a “lousy theory” — unfalsifiable, reductive, and just the latest version of mapping our dominant technology onto religion.
The conversation closes with Naval’s 2×2 matrix of truth and spreadability (conventional wisdom, fake news, heresies, nonsense) and the observation that the only things that make it through the information environment are fake news — because conventional wisdom doesn’t need spreading, heresies can’t spread, and nonsense goes nowhere. The real truths, the heresies, can only be discovered, whispered, and perhaps read.
Thoughts
Five years after The Almanack of Naval Ravikant, this megasode feels like Naval 3.0. The original Naval (pre-Almanack) was focused on practical wealth creation and startup wisdom. Almanack Naval synthesized that with Eastern philosophy and general life principles. This version integrates David Deutsch’s epistemology into everything — wealth becomes knowledge creation, good products become good explanations, and even enlightenment gets framed through the multiverse.
What strikes me most is the honesty about contradictions. Naval simultaneously says he’s “not sure happiness exists” while describing his life as amazing. He advocates dropping all obligations while maintaining zero estranged family members. He promotes laziness while admitting he’s working harder than ever on his new company. These aren’t inconsistencies — they’re the natural texture of a philosophy that’s been lived rather than theorized.
The AI section is worth paying attention to. In a world where every AI influencer is either panicking about job replacement or promising utopia, Naval’s take is refreshingly grounded: AI is leverage, like every technology before it. It raises the floor for everyone. It provides no lasting edge because everyone gets the same answer. The edge comes from judgment, taste, and creativity — which are developed through experience, not downloaded from a model.
His list of “enlightened” people is going to generate the most discussion and controversy. Claiming to have personally verified a dozen enlightened beings is a bold statement from someone who also says he’s “not sure there’s such a thing as enlightenment.” But it’s consistent with his framework: enlightenment isn’t a special state. It’s the absence of a constructed self. It’s binary. And it doesn’t prevent you from running a company, dating, or living a fully functional life.
The deepest insight might be the simplest: stay healthy, get wealthy, seek truth, give love, and create beauty. If you internalize nothing else from these four hours, that five-part formula is worth the price of admission — which, in keeping with Naval’s philosophy, is free.
This article is a summary and analysis of Naval Ravikant’s 4-hour megasode on the Smart Friends podcast with Eric Jorgenson, released January 2026. The full episode is available for free on YouTube and all major podcast platforms.
Boris Cherny, creator and head of Claude Code at Anthropic, sat down with Lenny Rachitsky on Lenny’s Podcast for one of the most consequential interviews in recent tech history. With Claude Code now responsible for 4% of all public GitHub commits — and growing faster every day — Cherny laid out a vision in which traditional coding is a solved problem and the real frontier has shifted to idea generation, agentic AI, and a new role he calls the “Builder.”
TLDW (Too Long; Didn’t Watch)
Boris Cherny, the head of Claude Code at Anthropic, hasn’t manually written a single line of code since November 2025 — and he ships 10 to 30 pull requests every day. Claude Code now accounts for 4% of all public GitHub commits and is projected to reach 20% by end of 2026. Cherny believes coding as we know it is “solved” and that the future belongs to generalist “Builders” who blend product thinking, design sense, and AI orchestration. He advocates for underfunding teams, giving engineers unlimited tokens, building products for the model six months from now (not today), and following the “bitter lesson” of betting on the most general model. The Cowork product — Anthropic’s agentic tool for non-technical tasks — was built in just 10 days using Claude Code itself. Cherny also revealed three layers of AI safety at Anthropic: mechanistic interpretability, evals, and real-world monitoring.
Key Takeaways
1. Claude Code’s Growth Is Staggering
Claude Code now authors approximately 4% of all public GitHub commits, and Anthropic believes the real number is significantly higher when private repositories are included. Daily active users doubled in the month before this interview, and the growth curve isn’t just rising — it’s accelerating. SemiAnalysis predicted Claude Code will reach 20% of all GitHub commits by end of 2026. Claude Code alone is generating roughly $2 billion in revenue, with Anthropic overall at approximately $15 billion.
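A quick back-of-the-envelope check on what that projection implies. The ten-month window (from this interview to end of 2026) is my assumption for illustration, not a figure stated in the episode.

```python
# What sustained month-over-month growth takes commit share from 4% to 20%?
current_share = 0.04   # of public GitHub commits, per the interview
target_share = 0.20    # projected for end of 2026
months = 10            # assumed window from the interview to year's end

# Implied steady monthly growth factor:
monthly_factor = (target_share / current_share) ** (1 / months)
print(f"{monthly_factor:.3f}")  # → 1.175, i.e. ~17.5% month-over-month
```

In other words, the projection amounts to assuming the share of commits keeps growing at roughly the pace the interview describes, compounding monthly.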
2. 100% AI-Written Code Is the New Normal
Cherny hasn’t manually edited a single line of code since November 2025. He ships 10 to 30 pull requests per day, making him one of the most prolific engineers at Anthropic — all through Claude Code. He still reviews code and maintains human checkpoints, but the actual writing of code is entirely handled by AI. Claude also reviews 100% of pull requests at Anthropic before human review.
3. Coding Is “Solved” — The Frontier Has Shifted
In Cherny’s view, coding — at least the kind of programming most engineers do — is a solved problem. The new frontier is idea generation. Claude is already analyzing bug reports and telemetry data to propose its own fixes and suggest what to build next. The shift is from “tool” to “co-worker.” Cherny expects this to become increasingly true across every codebase and tech stack over the coming months.
4. The Rise of the “Builder” Role
Traditional role boundaries between engineer, product manager, and designer are dissolving. On the Claude Code team, everyone codes — the PM, the engineering manager, the designer, the finance person, the data scientist. Cherny predicts the title “Software Engineer” will start disappearing by end of 2026, replaced by something like “Builder” — a generalist who blends design sense, business logic, technical orchestration, and user empathy.
5. Underfunding Teams Is a Feature, Not a Bug
Cherny advocates deliberately underfunding teams as a strategy. When you assign one engineer to a project instead of five, they’re forced to leverage Claude Code to automate everything possible. This isn’t about cost-cutting — it’s about forcing innovation through constraint. The results at Anthropic have been dramatic: while the engineering team grew roughly 4x, productivity per engineer increased 200% in terms of pull requests shipped.
6. Give Engineers Unlimited Tokens
Rather than hiring more headcount, Cherny’s advice to CTOs is to give engineers as many tokens as possible. Let them experiment with the most capable models without worrying about cost. The most innovative ideas come from people pushing AI to its limits. Some Anthropic engineers are spending hundreds of thousands of dollars per month in tokens. Optimize costs later — only after you’ve found the idea that works.
7. Build for the Model Six Months From Now
One of Cherny’s most actionable insights: don’t build for today’s model capabilities — build for where the model will be in six months. Early versions of Claude Code only wrote about 20% of Cherny’s code. But the team bet on exponential improvement, and when Opus 4 and Sonnet 4 arrived, product-market fit clicked instantly. This means your product might feel rough at first, but when the next model generation drops, you’ll be perfectly positioned.
8. The Bitter Lesson Applied to Product
Cherny references Rich Sutton’s famous “Bitter Lesson” blog post as a core principle for the Claude Code team: the more general model will always outperform the more specific one. In practice, this means avoiding rigid workflows and orchestration scaffolding around AI models. Don’t box the model in. Give it tools, give it a goal, and let it figure out the path. Scaffolding might improve performance 10-20%, but those gains get wiped out with the next model generation.
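The "don't box the model in" principle can be sketched as a minimal agent loop: no fixed plan-search-write pipeline, just a goal, a bag of tools, and the model choosing the path. Everything here is a hand-rolled stand-in for illustration — `call_model` is a placeholder, not a real Anthropic API.

```python
# Sketch of a scaffolding-free agent loop: the harness only executes whatever
# tool the model picks; it imposes no workflow of its own.

def call_model(goal, tools, history):
    """Placeholder for an LLM call returning (tool_name, args), or None when done."""
    # In a real system this would be a model request; this stub stops immediately.
    return None

def run_agent(goal, tools, max_steps=20):
    history = []
    for _ in range(max_steps):
        action = call_model(goal, tools, history)
        if action is None:            # the model decides it has met the goal
            return history
        name, args = action
        result = tools[name](**args)  # model-chosen tool, model-chosen args
        history.append((name, args, result))
    return history

# Usage: tools are plain callables; the model, not the harness, picks the path.
print(run_agent("fix the failing test", {"bash": lambda cmd: ""}))
```

The design choice Cherny describes is visible in what's absent: there is no step ordering, retry policy, or routing logic to be invalidated when the next model generation arrives.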
9. Latent Demand — The Most Important Product Principle
Cherny calls latent demand “the single most important principle in product.” The idea: watch how people misuse or hack your product for purposes you didn’t design it for. That’s where your next product lives. Facebook Marketplace came from 40% of Facebook Group posts being buy-and-sell. Cowork came from non-engineers using Claude Code’s terminal for things like growing tomato plants, analyzing genomes, and recovering wedding photos from corrupted hard drives. There’s also a new dimension: watching what the model is trying to do and building tools to make that easier.
10. Cowork Was Built in 10 Days
Anthropic’s Cowork product — their agentic tool for non-technical tasks — was implemented by a small team in just 10 days, using Claude Code to build its own virtual machine and security scaffolding. Cowork was immediately a bigger hit than Claude Code was at launch. It can pay parking tickets, cancel subscriptions, manage project spreadsheets, message team members on Slack, respond to emails, and handle forms — and it’s growing faster than Claude Code did in its early days.
11. Three Layers of AI Safety at Anthropic
Cherny outlined three layers of safety: (1) Mechanistic interpretability — monitoring neurons inside the model to understand what it’s doing and detect things like deception at the neural level. (2) Evals — lab testing where the model is placed in synthetic situations to check alignment. (3) Real-world monitoring — releasing products as research previews to study unpredictable agent behavior in the wild. Claude Code was used internally for 4-5 months before public release specifically for safety study.
12. Why Boris Left Anthropic for Cursor (and Came Back After Two Weeks)
Cherny briefly left Anthropic to join Cursor, drawn by their focus on product quality. But within two weeks, he realized what he was missing: Anthropic’s safety mission. He described it as a psychological need — without mission-driven work, even building a great product wasn’t a substitute. He returned to Anthropic and the rest is history.
13. Manual Coding Skills Will Become Irrelevant in 1-2 Years
Cherny compared manual coding to assembly language — it’ll still exist beneath the surface, and understanding the fundamentals helps for now, but within a year or two it won’t matter for most engineers. He likened it to the printing press transition: a skill once limited to scribes became universal literacy over time. The volume of code created will explode while the cost drops dramatically.
14. Pro Tips for Using Claude Code Effectively
Cherny shared three specific tips: (1) Use the most capable model — currently Opus 4.6 with maximum effort enabled. Cheaper models often cost more tokens in the end because they require more correction and handholding. (2) Use Plan Mode — hit Shift+Tab twice in the terminal to enter plan mode, which tells the model not to write code yet. Go back and forth on the plan, then auto-accept edits once it looks good. Opus 4.6 will one-shot it correctly almost every time. (3) Explore different interfaces — Claude Code runs on terminal, desktop app, iOS, Android, web, Slack, GitHub, and IDE extensions. The same agent runs everywhere. Find what works for you.
Detailed Summary
The Origin Story of Claude Code
Claude Code began as a one-person hack. When Cherny joined Anthropic, he spent a month building weird prototypes that mostly never shipped, then spent another month doing post-training to understand the research side. He believes deeply that to build great products on AI, you have to understand “the layer under the layer” — meaning the model itself.
The first version was terminal-based and called “Claude CLI.” When he demoed it internally, it got two likes. Nobody thought a coding tool could be terminal-based. But the terminal form factor was chosen partly out of necessity (he was a solo developer) and partly because it was the only interface that could keep up with how fast the underlying model was improving.
The breakthrough moment during prototyping: Cherny gave the model a bash tool and asked it what music he was listening to. The model figured out — without any specific instructions — how to use the bash tool to answer that question. That moment of emergent tool use convinced him he was onto something.
The Growth Trajectory
Claude Code was released externally in February 2025 and was not immediately a hit. It took months for people to understand what it was. The terminal interface was alien to many. But internally at Anthropic, daily active users went vertical almost immediately.
There were multiple inflection points. The first major one was the release of Opus 4, which was Anthropic’s first ASL-3 class model. That’s when Claude Code’s growth went truly exponential. Another inflection came in November 2025 when Cherny personally crossed the 100% AI-written code threshold. The growth has continued to accelerate — it’s not just going up, it’s going up faster and faster.
The Spotify headline from the week of recording — “Spotify says its best developers haven’t written a line of code since December, thanks to AI” — underscored how mainstream the shift has become.
Thinking in Exponentials
Cherny emphasized that thinking in exponentials is deep in Anthropic’s DNA — three of their co-founders were the first three authors on the scaling laws paper. At Code with Claude (Anthropic’s developer conference) in May 2025, Cherny predicted that by year’s end, engineers might not need an IDE to code anymore. The room audibly gasped. But all he did was “trace the line” of the exponential curve of AI-written code.
The Printing Press Analogy
Cherny’s preferred historical analog for what’s happening is the printing press. In mid-1400s Europe, literacy was below 1%. A tiny class of scribes did all the reading and writing, employed by lords and kings who often couldn’t read themselves. After Gutenberg, more printed material was created in 50 years than in the previous thousand. Costs dropped 100x. Literacy rose to 70% globally over two centuries.
Cherny sees coding undergoing the same transition: a skill locked away in a tiny class of “scribes” (software engineers) is becoming accessible to everyone. What that unlocks is as unpredictable as the Renaissance was to someone in the 1400s. He also shared a remarkable historical detail — an interview with a scribe from the 1400s who was actually excited about the printing press because it freed them from copying books to focus on the artistic parts: illustration and bookbinding. Cherny felt a direct parallel to his own experience of being freed from coding tedium to focus on the creative and strategic parts of building.
What AI Transforms Next
Cherny believes roles adjacent to engineering — product management, design, data science — will be transformed next. The key technology enabling this is true agentic AI: not chatbots, but AI that can actually use tools and act in the world. Cowork is the first step in bringing this to non-technical users.
He was candid that this transition will be “very disruptive and painful for a lot of people” and that it’s a conversation society needs to have. Anthropic has hired economists, policy experts, and social impact specialists to help think through these implications.
The Latent Demand Framework in Depth
Cherny credited Fiona Fung, the founding manager of Facebook Marketplace, for popularizing the concept of latent demand. The examples are compelling: someone using Claude Code to grow tomato plants, another analyzing their genome, another recovering wedding photos from a corrupted hard drive, a data scientist who figured out how to install Node.js and use a terminal to run SQL analysis through Claude Code.
But Cherny added a new dimension specific to AI products: latent demand from the model itself. Rather than boxing the model into a predetermined workflow, observe what the model is trying to do and build to support that. At Anthropic they call this being “on distribution.” Give the model tools and goals, then let it figure out the path. The product is the model — everything else is minimal scaffolding.
Safety as a Core Differentiator
The interview made clear that safety isn’t just a talking point at Anthropic — it’s why everyone is there, including Cherny. He described the work of Chris Olah on mechanistic interpretability: studying model neurons at a granular level to understand how concepts are encoded, how planning works, and how to detect things like deception. A single neuron might correspond to a dozen concepts through a phenomenon called superposition.
Anthropic’s “race to the top” philosophy means open-sourcing safety tools even when they work for competing products. They released an open-source sandbox for running AI agents securely that works with any agent, not just Claude Code.
The Memory Leak Story
One of the most memorable anecdotes: Cherny was debugging a memory leak the traditional way — taking heap snapshots, using debuggers, analyzing traces. A newer engineer on the team simply told Claude Code: “Hey Claude, it seems like there’s a leak. Can you figure it out?” Claude Code took the heap snapshot, wrote itself a custom analysis tool on the fly, found the issue, and submitted a pull request — all faster than Cherny could do it manually. Even veterans of AI-assisted coding get stuck in old habits.
Personal Background and Post-AGI Plans
In a touching segment, Cherny and Rachitsky discovered they’re both from Odessa, Ukraine. Cherny’s grandfather was one of the first programmers in the Soviet Union, working with punch cards. Before joining Anthropic, Cherny lived in rural Japan where he learned to make miso — a process that takes months to years and taught him to think on long timescales. His post-AGI plan? Go back to making miso.
His book recommendations: Functional Programming in Scala (the best technical book he’s ever read), Accelerando by Charles Stross (captures the essence of this moment better than anything), and The Wandering Earth by Liu Cixin (Chinese sci-fi short stories from the Three Body Problem author).
Thoughts and Analysis
This interview is one of the most important conversations about the future of software engineering to come out in 2026. Here are some things worth sitting with:
The “solved” framing is provocative but precise. Cherny isn’t saying software engineering is solved — he’s saying the act of translating intent into working code is solved. The thinking, architecting, deciding-what-to-build, and ensuring-it’s-correct parts are very much unsolved. This distinction matters enormously and most of the pushback in the YouTube comments misses it.
The underfunding principle is genuinely counterintuitive. Most organizations respond to AI tools by trying to maintain headcount and “augment” existing workflows. Cherny’s approach is the opposite: reduce headcount on a project, give people unlimited AI tokens, and watch them figure out how to ship ten times faster. This is a fundamentally different organizational philosophy and one that most companies will resist until their competitors prove it works.
The “build for six months from now” advice is dangerous and brilliant. Dangerous because your product will underperform for months and investors will get nervous. Brilliant because when the next model drops, you’ll have the only product that takes full advantage of it. This is how Claude Code went from writing 20% of Cherny’s code to 100% — the product was ready when the model caught up.
The latent demand framework deserves serious study. The traditional version (watching users hack your product) is well-known from the Facebook era. The AI-native version (watching what the model is trying to do) is genuinely new. “The product is the model” is a deceptively simple statement that most AI product builders are still getting wrong by over-engineering workflows and scaffolding.
The Cowork trajectory matters more than Claude Code. Claude Code transforms engineers. Cowork transforms everyone else. If Cowork delivers on even half of what Cherny describes — paying tickets, managing project spreadsheets, responding to emails, canceling subscriptions — then the total addressable market dwarfs coding tools. The fact that it was built in 10 days and was an immediate hit suggests Anthropic has found product-market fit for agentic AI beyond engineering.
The safety discussion felt genuine. Cherny’s explanation of mechanistic interpretability — actually being able to monitor model neurons and detect deception — is one of the clearest public explanations of Anthropic’s safety approach. The fact that the safety mission is what brought him back from Cursor (where he lasted only two weeks) speaks to the culture. Whether you think safety is a genuine concern or a competitive moat, it’s clearly a core part of how Anthropic attracts and retains talent.
The elephant in the room: this is Anthropic’s head of product telling you to use more tokens. Multiple YouTube commenters pointed this out, and they’re right to flag it. But the underlying logic holds: if a less capable model requires more correction rounds and more tokens to achieve the same result, then the “cheaper” model isn’t actually cheaper. That’s a testable claim, and most engineers using these tools regularly will tell you it checks out.
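The "cheaper model isn't actually cheaper" claim is just arithmetic once you account for correction rounds. All numbers below are hypothetical assumptions for illustration, not Anthropic pricing.

```python
# Toy cost model: each correction round re-spends roughly the same tokens.

def task_cost(price_per_mtok, tokens_per_round, rounds):
    """Total dollars for a task: price per million tokens x tokens x rounds."""
    return price_per_mtok * (tokens_per_round / 1e6) * rounds

# Hypothetical: frontier model one-shots the task; cheap model needs 6 rounds.
frontier = task_cost(price_per_mtok=15.0, tokens_per_round=200_000, rounds=1)
cheaper = task_cost(price_per_mtok=3.0, tokens_per_round=200_000, rounds=6)

print(round(frontier, 2))  # → 3.0: one-shot at the higher per-token price
print(round(cheaper, 2))   # → 3.6: the 5x discount is erased by rework
```

Under these (made-up) assumptions the discounted model costs more in absolute tokens, before even counting the engineer's time spent shepherding six rounds.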
Whether you agree with the “coding is solved” framing or not, the data is hard to argue with. Four percent of all GitHub commits. Two hundred percent productivity gains per engineer. A product that was built in 10 days and scaled to millions of users. These aren’t predictions — they’re measurements. And the curve is still accelerating.
This article is based on Boris Cherny’s appearance on Lenny’s Podcast, published February 19, 2026. Boris Cherny can be found on X/Twitter and at borischerny.com.

In a move that underscores the intensifying race to dominate AI agent technology, OpenAI has brought aboard Peter Steinberger, the visionary Austrian developer behind the viral open-source project OpenClaw. As reported by Reuters, Fortune, and TechCrunch, the deal was announced on February 15, 2026. This isn’t a conventional acquisition but an “acquihire,” where Steinberger joins OpenAI to spearhead the development of next-generation personal AI agents.
Meanwhile, OpenClaw transitions to an independent foundation, remaining fully open-source with continued support from OpenAI (confirmed via Steinberger’s Blog and LinkedIn). This strategic alignment comes amid soaring interest in AI agents, a market projected by AInvest to hit $52.6 billion by 2030 with a 46.3% compound annual growth rate.
The announcement, made via a post on X by OpenAI CEO Sam Altman around 21:39 GMT, arrived just hours before widespread media coverage from outlets like Fortune. Steinberger swiftly confirmed the news in a personal blog post, emphasizing his excitement for the future while reaffirming OpenClaw’s independence.
The Rise of OpenClaw: From Playground Project to Phenomenon
OpenClaw, originally launched as Clawdbot in November 2025—a playful nod to Anthropic’s Claude model—quickly evolved into a powerhouse open-source AI agent framework designed for personal use (Fortune, Steinberger’s Blog, APIYI). Steinberger, who “vibe coded” the project solo after a three-year hiatus following the sale of his previous company for over $100 million, saw it explode in popularity. It amassed over 100,000 GitHub stars, drew 2 million visitors in a week, and became the fastest-growing repo in GitHub history—surpassing milestones of projects like React and Linux (Yahoo Finance, LinkedIn).
A trademark dispute with Anthropic prompted renames: first to Moltbot (evoking metamorphosis), then to OpenClaw in early 2026. The framework empowers AI to autonomously handle tasks on users’ devices, fostering a community focused on data ownership and multi-model support.
Key capabilities that fueled its hype include:
Managing emails and inboxes.
Booking flights and restaurant reservations, and handling flight check-ins.
Interacting with services like insurers.
Integrating with apps such as WhatsApp and Slack for task delegation.
Creating a “social network” for AI agents via features like Moltbook, which spawned 1.6 million agents.
Despite its success, sustainability proved challenging. Steinberger personally shouldered infrastructure costs of $10,000 to $20,000 monthly, routing sponsorships to dependencies rather than himself, even as donations and corporate support (including from OpenAI) trickled in.
The Path to the Deal: Billion-Dollar Bids and Open-Source Principles
Prior to the announcement, Steinberger fielded billion-dollar acquisition offers from tech giants Meta and OpenAI (Yahoo Finance). Meta’s Mark Zuckerberg personally messaged Steinberger on WhatsApp, sparking a 10-minute debate over AI models, while OpenAI’s Sam Altman offered computational resources via a Cerebras partnership to boost agent performance. Meta aggressively pursued Steinberger and his team, but OpenAI advanced in talks to hire him and key contributors.
Steinberger spent the preceding week in San Francisco meeting AI labs, accessing unreleased research. He insisted any deal preserve OpenClaw’s open-source nature, likening it to Chrome and Chromium. Ultimately, OpenAI’s vision aligned best with his goal of accessible agents.
Key Announcements and Voices from the Frontlines
Sam Altman, in his X post on February 15, 2026, hailed Steinberger as a “genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people.” He added, “We expect this will quickly become core to our product offerings. OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. The future is going to be extremely multi-agent and it’s important to us to support open source as part of that.”
Steinberger’s blog post echoed this enthusiasm: “tl;dr: I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent. The last month was a whirlwind… When I started exploring AI, my goal was to have fun and inspire people… My next mission is to build an agent that even my mum can use… I’m a builder at heart… What I want is to change the world, not build a large company… The claw is the law.”
Strategic Implications: Opportunities and Challenges Ahead
For OpenAI, this bolsters their AI agent push, potentially accelerating consumer-grade solutions and addressing barriers like setup complexity and security. It positions them in the “personal agent race” against Meta, emphasizing multi-agent systems. The broader AI agents market could reach $180 billion by 2033; the deal’s financial terms are undisclosed but likely substantial.
OpenClaw benefits from foundation status (akin to the Linux Foundation), ensuring independence and community focus with OpenAI’s sponsorship.
However, risks loom large. OpenClaw’s “unfettered access” to devices raises security concerns, including data breaches and rogue actions—like one incident of spamming hundreds of iMessages. China’s industry ministry warned of cyberattack vulnerabilities if misconfigured. Steinberger aims to prioritize safety and accessibility.
Community Pulse: Excitement, Skepticism, and Satire
Reactions on X blend hype and caution. Cointelegraph called it a “big move” for agent ecosystems. One user called it the “birth of the agent era,” while another satirically predicted a shift to “ClosedClaw.” Fears of closure persist, but congratulations abound, with some viewing Anthropic’s trademark push as a “fumble.”
LinkedIn’s Reyhan Merekar praised Steinberger’s solo feat: “Literally coding alone at odd hours… Faster than React, Linux, and Kubernetes combined.”
Beyond the Headlines: Vision and Value
Steinberger’s core vision: Agents for all, even non-tech users, with emphasis on safety, cutting-edge models, and impact over empire-building. OpenClaw’s strengths—model-agnostic design, delegation-focused UX, and persistent memory—eluded even well-funded labs.
As of February 15, 2026, this marks a pivotal moment in AI’s evolution, blending open innovation with corporate muscle. No further updates have emerged, but the multi-agent future Altman envisions is accelerating.
Anthropic CEO Dario Amodei joined Dwarkesh Patel for a high-stakes deep dive into the endgame of the AI exponential. Amodei predicts that by 2026 or 2027, we will reach a “country of geniuses in a data center”—AI systems capable of Nobel Prize-level intellectual work across all digital domains. While technical scaling remains remarkably smooth, Amodei warns that the real-world friction of economic diffusion and the ruinous financial risks of $100 billion training clusters are now the primary bottlenecks to total global transformation.
Key Takeaways
The Big Blob Hypothesis: Intelligence is an emergent property of scaling compute, data, and broad distribution; specific algorithmic “cleverness” is often just a temporary workaround for lack of scale.
AGI is a 2026-2027 Event: Amodei is 90% certain we reach genius-level AGI by 2035, with a strong “hunch” that the technical threshold for a “country of geniuses” arrives in the next 12-24 months.
Software Engineering is the First Domino: Within 6-12 months, models will likely perform end-to-end software engineering tasks, shifting human engineers from “writers” to “editors” and strategic directors.
The $100 Billion Gamble: AI labs are entering a “Cournot equilibrium” where massive capital requirements create a high barrier to entry. Being off by just one year in revenue growth projections can lead to company-wide bankruptcy.
Economic Diffusion Lag: Even after AGI-level capabilities exist in the lab, real-world adoption (curing diseases, legal integration) will take years due to regulatory “jamming” and organizational change management.
Detailed Summary: Scaling, Risk, and the Post-Labor Economy
The Three Laws of Scaling
Amodei revisits his foundational “Big Blob of Compute” hypothesis, asserting that intelligence scales predictably when compute and data are scaled in proportion—a process he likens to a chemical reaction. He notes a shift from pure pre-training scaling to a new regime of Reinforcement Learning (RL) and Test-Time Scaling. These allow models to “think” longer at inference time, unlocking reasoning capabilities that pre-training alone could not achieve. Crucially, these new scaling laws appear just as smooth and predictable as the ones that preceded them.
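The interview stays qualitative, but the published parameterization of these pre-training loss curves (the Chinchilla form, supplied here as background rather than anything Amodei states in the episode) makes the "in proportion" claim concrete:

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

where N is the parameter count, D is the number of training tokens, and E is the irreducible loss. Growing N and D together shrinks both correction terms along a smooth, predictable curve, which is the "chemical reaction" intuition written as a formula.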
The “Country of Geniuses” and the End of Code
A recurring theme is the imminent automation of software engineering. Amodei predicts that AI will soon handle end-to-end SWE tasks, including setting technical direction and managing environments. He argues that because AI can ingest a million-line codebase into its context window in seconds, it bypasses the months of “on-the-job” learning required by human engineers. This “country of geniuses” will operate at 10-100x human speed, potentially compressing a century of biological and technical progress into a single decade—a concept he calls the “Compressed 21st Century.”
Financial Models and Ruinous Risk
The economics of building the first AGI are terrifying. Anthropic’s revenue has scaled 10x annually (zero to $10 billion in three years), but labs are trapped in a cycle of spending every dollar on the next, larger cluster. Amodei explains that building a $100 billion data center requires a 2-year lead time; if demand growth slows from 10x to 5x during that window, the lab collapses. This financial pressure forces a “soft takeoff” where labs must remain profitable on current models to fund the next leap.
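The arithmetic behind that "ruinous risk" is easy to sketch. All numbers below are illustrative round figures from the interview's framing, not Anthropic's actual financials:

```python
def projected_revenue(base: float, annual_multiple: float, years: int) -> float:
    """Compound revenue forward at a fixed annual growth multiple."""
    return base * annual_multiple ** years

# Illustrative: a lab at $10B revenue signs off on a $100B cluster
# that takes 2 years (the lead time) to build.
base_revenue = 10e9
lead_time_years = 2

on_plan = projected_revenue(base_revenue, 10, lead_time_years)  # growth holds at 10x/yr
slowed = projected_revenue(base_revenue, 5, lead_time_years)    # growth halves to 5x/yr

print(f"Projected at 10x/yr: ${on_plan / 1e9:,.0f}B")   # $1,000B
print(f"Projected at 5x/yr:  ${slowed / 1e9:,.0f}B")    # $250B
print(f"Shortfall vs plan:   ${(on_plan - slowed) / 1e9:,.0f}B")
```

Halving the growth rate for just the two-year lead time leaves the lab hundreds of billions short of the revenue the cluster was sized against, which is the sense in which being off by roughly one year of growth can be fatal.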
Governance and the Authoritarian Threat
Amodei expresses deep concern over “offense-dominant” AI, where a single misaligned model could cause catastrophic damage. He advocates for “AI Constitutions”—teaching models principles like “honesty” and “harm avoidance” rather than rigid rules—to allow for better generalization. Geopolitically, he supports aggressive chip export controls, arguing that democratic nations must hold the “stronger hand” during the inevitable post-AI world order negotiations to prevent a global “totalitarian nightmare.”
Final Thoughts: The Intelligence Overhang
The most chilling takeaway from this interview is the concept of the Intelligence Overhang: the gap between what AI can do in a lab and what the economy is prepared to absorb. Amodei suggests that while the “silicon geniuses” will arrive shortly, our institutions—the FDA, the legal system, and corporate procurement—are “jammed.” We are heading into a world of radical “biological freedom” and the potential cure for most diseases, yet we may be stuck in a decade-long regulatory bottleneck while the “country of geniuses” sits idle in their data centers. The winner of the next era won’t just be the lab with the most FLOPs, but the society that can most rapidly retool its institutions to survive its own technological adolescence.
In the history of open-source software, few projects have exploded with the velocity, chaos, and sheer “weirdness” of OpenClaw. What began as a one-hour prototype by a developer frustrated with existing AI tools has morphed into the fastest-growing repository in GitHub history, amassing over 180,000 stars in a matter of months.
But OpenClaw isn’t just a tool; it is a cultural moment. It’s a story about “Space Lobsters,” trademark wars with billion-dollar labs, the death of traditional apps, and a fundamental shift in what it means to be a programmer. In a marathon conversation on the Lex Fridman Podcast, creator Peter Steinberger pulled back the curtain on the “Age of the Lobster.”
Here is the definitive deep dive into the viral AI agent that is rewriting the rules of software.
The TL;DW (Too Long; Didn’t Watch)
The “Magic” Moment: OpenClaw started as a simple WhatsApp-to-CLI bridge. It went viral when the agent—without being coded to do so—figured out how to process an audio file by inspecting headers, converting it with ffmpeg, and transcribing it via API, all autonomously.
Agentic Engineering > Vibe Coding: Steinberger rejects the term “vibe coding” as a slur. He practices “Agentic Engineering”—a method of empathizing with the AI, treating it like a junior developer who lacks context but has infinite potential.
The “Molt” Wars: The project survived a brutal trademark dispute with Anthropic (creators of Claude). During a forced rename to “MoltBot,” crypto scammers sniped Steinberger’s domains and usernames in seconds, serving malware to users. This led to a “Manhattan Project” style secret operation to rebrand as OpenClaw.
The End of the App Economy: Steinberger predicts 80% of apps will disappear. Why use a calendar app or a food delivery GUI when your agent can just “do it” via API or browser automation? Apps will devolve into “slow APIs”.
Self-Modifying Code: OpenClaw can rewrite its own source code to fix bugs or add features, a concept Steinberger calls “self-introspection.”
The Origin: Prompting a Revolution into Existence
The story of OpenClaw is one of frustration. In late 2025, Steinberger wanted a personal assistant that could actually do things—not just chat, but interact with his files, his calendar, and his life. When he realized the big AI labs weren’t building it fast enough, he decided to “prompt it into existence”.
The One-Hour Prototype
The first version was built in a single hour. It was a “thin line” connecting WhatsApp to a Command Line Interface (CLI) running on his machine.
“I sent it a message, and a typing indicator appeared. I didn’t build that… I literally went, ‘How the f*** did he do that?’”
The agent had received an audio file (an Opus file with no extension). Instead of crashing, it analyzed the file header to work out what it was, determined the file needed converting with `ffmpeg`, found that tool wasn't installed, worked around the gap by using `curl` to send the audio to OpenAI's Whisper API for transcription, and replied to Peter. It did all this autonomously. That was the spark that proved this wasn't just a chatbot; it was an agent with problem-solving capabilities.
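The first step the agent improvised, identifying an extensionless file from its header, can be sketched in a few lines. The magic-byte signatures below are standard container markers; everything else is a guess at the approach, not the agent's actual code:

```python
def sniff_audio(path: str) -> str:
    """Guess an audio container from its magic bytes, the way the agent
    reportedly inspected the header of an extensionless voice note."""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"OggS":        # Ogg container; WhatsApp voice notes are Opus-in-Ogg
        return "ogg/opus"
    if magic[:3] == b"ID3" or magic[:2] == b"\xff\xfb":  # ID3 tag or bare MPEG frame
        return "mp3"
    if magic == b"fLaC":
        return "flac"
    return "unknown"

# From here the agent's plan was: convert with ffmpeg if available,
# otherwise send the raw file to a transcription API (Whisper) via curl.
```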
The Philosophy of the Lobster: Why OpenClaw Won
In a sea of corporate, sanitized AI tools, OpenClaw won because it was weird.
Peter intentionally infused the project with “soul.” While tools like GitHub Copilot or ChatGPT are designed to be helpful but sterile, OpenClaw (originally “Clawdbot,” a play on “Claude” and “claws”) was designed to be a “Space Lobster in a TARDIS”.
The soul.md File
At the heart of OpenClaw’s personality is a file called soul.md. This is the agent’s constitution. Unlike Anthropic’s “Constitutional AI,” which is hidden, OpenClaw’s soul is modifiable. It even wrote its own existential disclaimer:
“I don’t remember previous sessions… If you’re reading this in a future session, hello. I wrote this, but I won’t remember writing it. It’s okay. The words are still mine.”
This mix of high-utility code and “high-art slop” created a cult following. It wasn’t just software; it was a character.
The “Molt” Saga: A Trademark War & Crypto Snipers
The project's massive success drew the attention of Anthropic, the creators of the “Claude” model. They politely requested a name change to avoid confusion. What should have been a simple rebrand turned into a cybersecurity nightmare.
The 5-Second Snipe
Peter attempted to rename the project to “MoltBot.” He had two browser windows open to execute the switch. In the five seconds it took to move his mouse from one window to another, crypto scammers “sniped” the account name.
Suddenly, the official repo was serving malware and promoting scam tokens. “Everything that could go wrong, did go wrong,” Steinberger recalled. The scammers even sniped the NPM package in the minute it took to upload the new version.
The Manhattan Project
To fix this, Peter had to go dark. He planned the rename to “OpenClaw” like a military operation. He set up a “war room,” created decoy names to throw off the snipers, and coordinated with contacts at GitHub and X (Twitter) to ensure the switch was atomic. He even called Sam Altman personally to check if “OpenClaw” would cause issues with OpenAI (it didn’t).
Agentic Engineering vs. “Vibe Coding”
Steinberger offers a crucial distinction for developers entering this new era. He rejects the term “vibe coding” (coding by feel without understanding) and proposes Agentic Engineering.
The Empathy Gap
Successful Agentic Engineering requires empathy for the model.
Tabula Rasa: The agent starts every session with zero context. It doesn’t know your architecture or your variable names.
The Junior Dev Analogy: You must guide it like a talented junior developer. Point it to the right files. Don’t expect it to know the whole codebase instantly.
Self-Correction: Peter often asks the agent, “Now that you built it, what would you refactor?” The agent, having “felt” the pain of the build, often identifies optimizations it couldn’t see at the start.
Codex (German) vs. Opus (American)
Peter dropped a hilarious but accurate analogy for the two leading models:
Claude Opus 4.6: The “American” colleague. Charismatic, eager to please, says “You’re absolutely right!” too often, and is great for roleplay and creative tasks.
GPT-5.3 Codex: The “German” engineer. Dry, sits in the corner, doesn’t talk much, reads a lot of documentation, but gets the job done reliably without the fluff.
The End of Apps & The Future of Software
Perhaps the most disruptive insight from the interview is Steinberger’s view on the app economy.
“Why do I need a UI?”
He argues that 80% of apps will disappear. If an agent has access to your location, your health data, and your preferences, why do you need to open MyFitnessPal? The agent can just log your calories based on where you ate. Why open Uber Eats? Just tell the agent “Get me lunch.”
Apps that try to block agents (like X/Twitter cutting off API access) are fighting a losing battle. “If I can access it in the browser, it’s an API. It’s just a slow API,” Peter notes. OpenClaw uses tools like Playwright to simply click “I am not a robot” buttons and scrape the data it needs, regardless of developer intent.
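The "slow API" framing is easy to demonstrate: once an agent can render a page, any visible field is machine-readable. Here is a toy extractor over invented markup (a real agent would drive an actual browser with something like Playwright rather than regex-parse HTML):

```python
import re

def scrape_price(html: str):
    """Pull one field out of markup that was never meant to be an API.
    The data-price attribute is a made-up example, not a real site's schema."""
    m = re.search(r'data-price="([\d.]+)"', html)
    return float(m.group(1)) if m else None

page = '<div class="item"><span data-price="12.50">$12.50</span></div>'
print(scrape_price(page))  # 12.5
```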
Thoughts: The “Mourning” of the Craft
Steinberger touched on a poignant topic for developers: the grief of losing the craft of coding. For decades, programmers have derived identity from their ability to write syntax. As AI takes over the implementation, that identity is under threat.
But Peter frames this not as an end, but an evolution. We are moving from “programmers” to “builders.” The barrier to entry has collapsed. The bottleneck is no longer your ability to write Rust or C++; it is your ability to imagine a system and guide an agent to build it. We are entering the age of the System Architect, where one person can do the work of a ten-person team.
OpenClaw is not just a tool; it is the first true operating system for this new reality.
Ben Thompson, the author of Stratechery and widely considered the internet’s premier tech analyst, recently joined John Collison for a wide-ranging discussion on the Stripe YouTube channel. The conversation serves as a masterclass on the mechanics of the internet economy, covering everything from why Taiwan is the “most convenient place to live” to the existential threat facing seat-based SaaS pricing.
Thompson, known for his Aggregation Theory, offers a contrarian defense of advertising, a grim prediction for chip supply in 2029, and a nuanced take on why independent media bundles (like Substack) rarely work for the top tier.
TL;DW (Too Long; Didn’t Watch)
The Core Thesis: The tech industry is undergoing a structural reset. Public markets are right to devalue SaaS companies that rely on seat-based pricing in an AI world. Meanwhile, the “AI Revolution” is heading toward a hardware cliff: TSMC is too risk-averse to build enough capacity for 2029, meaning Hyperscalers (Amazon, Google, Microsoft) must effectively subsidize Intel or Samsung to create economic insurance. Finally, the best business model for AI isn’t subscriptions or search ads—it’s Meta-style “discovery” advertising that anticipates user needs before they ask.
Key Takeaways
Ads are a Public Good: Thompson argues that advertising is the only mechanism that allows the world’s poorest users to access the same elite tools (Search, Social, AI) as the world’s richest.
Intent vs. Discovery: Putting banner ads in an AI chat (Intent) is a terrible user experience. Using AI to build a profile and show you things you didn’t know you wanted (Discovery/Meta style) is the holy grail.
The SaaS “Correction”: The market isn’t canceling software; it’s canceling the “infinite headcount growth” assumption. AI reduces the need for junior seats, crushing the traditional per-seat pricing model.
The TSMC Risk: TSMC operates on a depreciation-heavy model and will not overbuild capacity without guarantees. This creates a looming shortage. Hyperscalers must fund a competitor (Intel/Samsung) not for geopolitics, but for capacity assurance.
The Media Pond Theory: The internet allows for millions of niche “ponds.” You don’t want to be a small fish in the ocean; you want to be the biggest fish in your own pond.
Stripe Feedback: In a candid moment, Thompson critiques Stripe’s ACH implementation, noting that if a team add-on fails, the entire plan gets canceled—a specific pain point for B2B users.
Detailed Summary
1. The Geography of Convenience: Why Taiwan Wins
The conversation begins with Thompson’s adopted home, Taiwan. He describes it as the “most convenient place to live” on Earth, largely due to mixed-use urban planning where residential towers sit atop commercial first floors. Unlike Japan, where navigation can be difficult for non-speakers, or San Francisco, where the restaurant economy is struggling, Taiwan represents the pinnacle of the “Uber Eats” economy.
Thompson notes that while the buildings may look dilapidated on the outside (a known aesthetic quirk of Taipei), the interiors are palatial. He argues that Taiwan is arguably the greatest food delivery market in history, though this efficiency has a downside: many physical restaurants are converting into “ghost kitchens,” reducing the vibrancy of street life.
2. Aggregation Theory and the AI Ad Model
The most controversial part of Thompson’s analysis is his defense of advertising. While Silicon Valley engineers often view ads as a tax on the user experience, Thompson views them as the engine of consumer surplus. He distinguishes between two very different types of advertising for the AI era:
The “Search” Model (Google/Amazon): This captures intent. You search for a winter jacket; you get an ad for a winter jacket. Thompson argues this is bad for AI Chatbots because it feels like a conflict of interest. If you ask ChatGPT for an answer, and it serves you a sponsored link, you trust the answer less.
The “Discovery” Model (Meta/Instagram): This creates demand. The algorithm knows you so well that it shows you a winter jacket in October before you realize you need one.
The Opportunity: Thompson suggests that Google’s best play is not to put ads inside Gemini, but to use Gemini usage data to build a deeper profile of the user, which they can then monetize across YouTube and the open web. The “perfect” AI ad doesn’t look like an ad; it looks like a helpful suggestion based on deep, anticipatory profiling.
3. The “End” of SaaS and Seat-Based Pricing
Is SaaS canceled? Thompson argues that the public markets are correctly identifying a structural weakness in the SaaS business model: Headcount correlation.
For the last decade, SaaS valuations were driven by the assumption that companies would grow indefinitely, hiring more people and buying more “seats.” AI disrupts this.
“If an agent can do the work, you don’t need the seat. And if you don’t need the seat, the revenue contraction for companies like Salesforce or Box could be significant.”
The “Systems of Record” (databases, HR/Workday) are safe because they are hard to rip out. But “Systems of Engagement” that charge per user are facing a deflationary crisis. Thompson posits that the future is likely usage-based or outcome-based pricing, not seat-based.
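The deflationary mechanics show up clearly in a toy model (every number below is invented for illustration):

```python
def seat_revenue(seats, price_per_seat):
    """Classic SaaS: revenue scales with customer headcount."""
    return seats * price_per_seat

def usage_revenue(tasks_completed, price_per_task):
    """Usage-based: revenue scales with work done, not with heads."""
    return tasks_completed * price_per_task

# A 1,000-person customer pays $100/seat/month. AI agents let them cut
# to 600 seats while the same 50,000 tasks still get completed monthly.
before = seat_revenue(1_000, 100)   # $100,000/mo
after = seat_revenue(600, 100)      # $60,000/mo: a 40% contraction
usage = usage_revenue(50_000, 2)    # $100,000/mo: unchanged

print(before, after, usage)
```

Under seat pricing the vendor eats the headcount cut; under usage pricing revenue tracks the work the software actually does, which is why Thompson expects the shift.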
4. The TSMC Bottleneck (The “Break”)
Perhaps the most critical macroeconomic insight of the interview is what Thompson calls the “TSMC Break.”
Logic chip manufacturing (unlike memory chips) is not a commodity market; it’s a monopoly run by TSMC. Because building a fab costs billions in upfront capital depreciation, TSMC is financially conservative. They will not build a factory unless the capacity is pre-sold or guaranteed. They refuse to hold the bag on risk.
The Prediction: Thompson forecasts a massive chip shortage around 2029. The current AI boom demands exponential compute, but TSMC is only increasing CapEx incrementally.
The Solution: The Hyperscalers (Microsoft, Amazon, Google) are currently giving all their money to TSMC, effectively funding a monopoly that is bottlenecking them. Thompson argues they must aggressively subsidize Intel or Samsung to build viable alternative fabs. This isn’t about “patriotism” or “China invading Taiwan”—it is about economic survival. They need to pay for capacity insurance now to avoid a revenue ceiling later.
5. Media Bundles and the “Pond” Theory
Thompson reflects on the success of Stratechery, which was the pioneer of the paid newsletter model. He utilizes the “Pond” analogy:
“You don’t want to be in the ocean with Bill Simmons. You want to dig your own pond and be the biggest fish in it.”
He discusses why “bundling” writers (like a Substack Bundle) is theoretically optimal but practically impossible.
The Bundle Paradox: Bundles work best when there are few suppliers (e.g., Spotify negotiating with 4 music labels). But in the newsletter economy, the “Whales” (top writers) make more money going independent than they would in a bundle. Therefore, a bundle only attracts “Minnows” (writers with no audience), making the bundle unattractive to consumers.
Rapid Fire Thoughts & “Hot Takes”
Apple Vision Pro: A failure of imagination. Thompson critiques Apple for using 2D television production techniques (camera cuts) in a 3D immersive environment. “Just let me sit courtside.”
iPhone Air: Thompson claims the new slim form factor is the “greatest smartphone ever made” because it disappears into the pocket, marking a return to utility over spec-bloat.
TikTok: The issue was never user data (which is boring vector numbers); the issue was always algorithm control. The US failed to secure control of the algorithm in the divestiture talks, which Thompson views as a disaster.
Crypto: He remains a “crypto defender” because, in an age of infinite AI-generated content, cryptographic proof of authenticity and digital scarcity becomes more valuable, not less.
Work/Life Balance: Thompson attributes his success to doubling down on strengths (writing/analysis) and aggressively outsourcing weaknesses (he has an assistant manage his “Getting Things Done” file because he is incapable of doing it himself).
Thoughts and Analysis
This interview highlights why Ben Thompson remains the “analyst’s analyst.” While the broader market is obsessed with the capabilities of AI models (can it write code? can it make art?), Thompson is focused entirely on the value chain.
His insight on the Ad-Funded AI future is particularly sticky. We are currently in a “skeuomorphic” phase of AI, trying to shoehorn chatbots into search engine business models. Thompson’s vision—that AI will eventually know you well enough to skip the search bar entirely and simply fulfill desires—is both utopian and dystopian. It suggests that the privacy wars of the 2010s were just the warm-up act for the AI profiling of the 2030s.
Furthermore, the TSMC warning should be a flashing red light for investors. If the physical layer of compute cannot scale to meet the software demand due to corporate risk aversion, the “AI Bubble” might burst not because the tech doesn’t work, but because we physically cannot manufacture the chips to run it at scale.
In a recent episode of the Out of Office podcast, Lightspeed partner Michael Mignano sat down with Nikita Bier, the Head of Product at X (formerly Twitter). Filmed in Bier’s hometown of Redondo Beach, California, the interview offers a rare, candid look into the chaotic, high-stakes world of running product at one of the world’s most influential platforms.
Bier, famous for founding the viral apps TBH and Gas, discusses everything from his unorthodox hiring by Elon Musk to the specific growth hacks being used to revitalize a 20-year-old platform. Here is a breakdown of the conversation.
TL;DW (Too Long; Didn’t Watch)
The Hire: Elon Musk hired Nikita via DM. The “interview” was a 48-hour sprint to redesign the app’s onboarding flow, which Nikita presented to Elon at 2:00 AM.
The Role: Bier describes his job as “customer support for 500 million people” and admits he acts as the company mascot/punching bag.
The Culture: X runs like a seed-stage startup. There are roughly 30 core product engineers, very few managers, and a flat hierarchy.
Growth Strategy: The team is focusing on “Starter Packs” to help new users find niche communities (like Peruvian politics or plumbing) rather than just general tech/news content.
Elon’s Management: Musk is deeply involved in engineering reviews and consistently pushes the team to “do the hard thing” rather than take shortcuts for quick growth.
Key Takeaways
1. Think Like an Adversary
Bier credits his early days as a “script kiddie” hacking AOL and building phishing sites (for educational purposes, mostly) as the foundation for his product sense. He argues that understanding how to break a system is essential for building consumer products. This “adversarial” mindset helps in preventing spam, but it is also the secret to growth—understanding exactly how funnels work and how to optimize them to the extreme.
2. The “Build in Public” Double-Edged Sword
Nikita is a prolific poster on X, often testing feature ideas in real-time. This creates an incredibly tight feedback loop where bugs are reported seconds after launch. However, it also makes him a target. He recounted the “Crypto Twitter” incident where a critique of “GM” (Good Morning) posts led to him being memed as a pig for a week. The sentiment only flipped when X shipped useful features like anti-spam measures and financial charts.
3. Fixing the Link Problem
One of the biggest recent product changes involved how X handles external links. Historically, social platforms downrank links to keep users on-site. Bier helped design a new UI where the engagement buttons (Like, Repost) remain visible while the user reads the article in the in-app browser. This allows X to capture engagement signals on external content, meaning the algorithm can finally properly rank high-quality news and articles without penalizing creators.
4. Identity and Verification
To combat political misinformation without compromising free speech, X launched “Country of Origin” labels. Bier explained that this allows users to see if a political opinion is coming from a local citizen or a “grifter” farm in a different country, providing context rather than censorship.
Detailed Summary
From TBH to X
The interview traces Bier’s history of building viral hits. He famously sold his app TBH (a positive polling app for teens) to Facebook, and years later, built Gas (effectively the same concept) and sold it to Discord. He dispelled the myth that he simply “sold the same app twice,” noting that while the mechanics were similar, the growth engines and social graph integrations had to be completely reinvented for a new generation.
The Musk Methodology
Bier provides a fascinating look at Elon Musk’s leadership style. Contrary to the idea of a distant executive, Musk conducts weekly reviews with engineers where they present their code and progress directly. Bier noted that Musk has a high tolerance for pain if it means long-term stability. For example, rewriting the entire recommendation algorithm or moving data centers in mere months—projects that would take years at Meta or Google—were executed rapidly because Musk insisted on “doing the hard thing.”
Reviving a 20-Year-Old Platform
The core challenge at X is growth. The app has billions of dormant accounts. Bier’s strategy relies on “resurrection”—bringing old users back by showing them that X isn’t just for news, but for specific interests. This led to the creation of Starter Packs, which curate lists of accounts for specific niches. The result has been a doubling of time spent for new users.
The Financial Future
Bier teased upcoming features that align with Musk’s vision of an “everything app.” This includes Smart Cashtags, which allow users to pull up real-time financial data and charts within the timeline. The long-term goal is to enable transactions directly on the platform, allowing users to buy products or tip creators seamlessly.
Thoughts
What stands out most in this interview is the sheer precariousness of Nikita Bier’s position. He is attempting to apply “growth hacking” principles—usually reserved for fresh, nimble startups—to a massive, entrenched legacy platform. The fact that the core engineering team is only around 30 people is staggering when compared to the thousands of engineers at Meta or TikTok.
Bier represents a new breed of product executive: the “poster-operator.” He doesn’t hide behind corporate comms; he engages in the muddy waters of the platform he builds. While this invites toxicity (and the occasional death threat, which he mentions casually), it affords X a speed of iteration that is unmatched in the industry. If X succeeds in revitalizing its growth, it will likely be because they treated the platform not as a museum of the internet, but as a product that still needs to find product-market fit every single day.
Yesterday Anthropic dropped Claude Opus 4.6 and with it a research-preview feature called Agent Teams inside Claude Code.
In plain English: you can now spin up several independent Claude instances that work on the same project at the same time, talk to each other directly, divide up the work, and coordinate without you having to babysit every step. It’s like giving your codebase its own little engineering squad.
1. What You Need First
Claude Code installed (the terminal app: the `claude` command)
A Pro, Max, Team, or Enterprise plan
Expect higher token usage – each teammate is a full separate Claude session
2. Turn On the Experimental Flag

```shell
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
claude
```
3. Start Your First Team (easiest way)
Just type in Claude Code:
Create an agent team to review PR #142.
Spawn three reviewers:
- One focused on security
- One on performance
- One on test coverage
4. Two Ways to See What’s Happening
A. In-process mode (default) – all teammates appear in one terminal. Use Shift + Up/Down to switch.
B. Split-pane mode (highly recommended) – set `teammateMode` in your Claude Code settings:

```json
{
  "teammateMode": "tmux" // or "iTerm2"
}
```
Here’s exactly what it looks like in real life:
[Image: Claude Code with multiple agents running in parallel (subagents/team view)]
[Image: tmux split-pane mode showing several Claude teammates working simultaneously]
5. Useful Commands You’ll Actually Use
Shift + Tab → Delegate mode (lead only coordinates)
Ctrl + T → Toggle shared task list
Shift + Up/Down → Switch teammate
Type to any teammate directly
6. Real-World Examples That Work Great
Parallel code review (security + perf + tests)
Bug hunt with competing theories
New feature across frontend/backend/tests
7. Best Practices & Gotchas
Use only for parallel work
Give teammates clear, self-contained tasks
Always run “Clean up the team” when finished
Bottom Line
Agent Teams turns Claude Code from a super-smart solo coder into a coordinated team of coders that can actually debate, divide labor, and synthesize results on their own.
Try it today on a code review or a stubborn bug — the difference is immediately obvious.