PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

  • Elad Gil on the AI Frontier: Compute Constraints, the Personal IPO, and Why Most AI Founders Should Sell in the Next 12 to 18 Months

    Elad Gil sat down with Tim Ferriss for a wide-ranging conversation that pairs almost perfectly with his recent Substack post Random thoughts while gazing at the misty AI Frontier. Together, the podcast and the post lay out the cleanest framework I have seen for what is actually happening in AI right now: a Korean memory bottleneck capping every lab, a class-wide personal IPO across the research community, the fastest revenue ramps in capitalist history, and a brutal dot-com-style culling that most founders do not yet want to admit is coming. Below is a complete breakdown.

    TLDW (Too Long, Didn’t Watch)

    Elad Gil argues that AI is producing the fastest revenue ramps in capitalist history while setting up the same brutal power law that wiped out 99 percent of dot-com companies. OpenAI and Anthropic each sit at roughly 0.1 percent of US GDP today, with AI overall on a path to 1 percent of GDP run rate by end of 2026, which is insanely fast by any historical standard. The current ceiling on capabilities is not chips but Korean high bandwidth memory, and that constraint will likely hold all major labs roughly comparable in capability through 2028. Talent has just experienced a class-wide personal IPO via Meta-led bidding, with packages running tens to hundreds of millions per researcher. Most AI companies should consider exiting in the next 12 to 18 months while the tide is high. Right now consensus is correct. Save the contrarianism for later.

    Key Takeaways

    • OpenAI and Anthropic are each at roughly 0.1 percent of US GDP. With US GDP near 30 trillion dollars and each lab at a roughly 30 billion dollar revenue run rate, AI has gone from essentially zero to 0.25 to 0.5 percent of GDP in just a few years. If the labs each hit 100 billion in run rate by year-end 2026 (which many expect), AI as a whole hits roughly 1 percent of GDP run rate inside a single year.
    • The AI personal IPO is real. 50 to a few hundred AI researchers across multiple companies just experienced a class-wide IPO event due to Meta-led bidding, with top packages reportedly tens to hundreds of millions per person. The closest historical analog is early crypto holders around 2017.
    • The bottleneck is Korean memory, not Nvidia chips. High bandwidth memory from Hynix, Samsung, Micron, and others is the binding constraint. Expected to hold roughly two years. After that, power and data center buildout become the next walls.
    • No lab can pull dramatically ahead before 2028. Because every lab is compute constrained on the same input, OpenAI, Anthropic, Google, xAI, and Meta should remain roughly comparable in capability through that window, absent an algorithmic breakthrough that stays inside one lab.
    • Compute is the new currency. Token budgets now define what an engineer can accomplish, what a company can spend, and what business models are viable. Some companies (neoclouds, Cursor) are effectively inference providers disguised as tools.
    • The dot-com base rate is the AI base rate. Around 1,500 to 2,000 companies went public in the late 1990s internet cycle. A dozen or two survived. AI will likely look the same.
    • Most AI founders should consider selling in the next 12 to 18 months. If you are not in the durable handful, this is your value maximizing window. A handful of companies (OpenAI, Anthropic) should never sell.
    • Buyers are bigger than ever. One percent of a 3 trillion dollar market cap is 30 billion dollars. That math makes massive AI acquisitions trivial for hyperscalers, vertical incumbents, and adjacent giants.
    • Underrated exit path: merger of equals. Two private AI competitors destroying each other on price should consider just merging. PayPal and X.com did exactly this in 2000.
    • 91 percent of global AI private market cap sits in a 10 by 10 mile square. If you want to do AI, move to the Bay Area. Remote work for cluster industries is BS.
    • Want money? Ask for advice. Want advice? Ask for money. The inverse also works: offering useful advice frequently leads to inbound investment opportunities.
    • AI is selling units of labor, not software. The shift is from selling seats and tools to selling cognitive output. This is why Harvey can win in legal, where decades of legal SaaS failed.
    • AI eats closed loops first. Tasks that can be turned into testable closed loop systems (code, AI research) get automated fastest. Map jobs on a 2×2 of closed loop tightness vs economic value to see where AI hits soonest.
    • Headcount will flatten at later stage companies. Multiple late stage CEOs told Elad they will not do big AI layoffs but will simply stop growing headcount even as revenue grows 30 to 100 percent. Hidden layoffs are also hitting outsourcing firms in India and the Philippines first.
    • The Slop Age could be the golden era of AI plus humanity. AI produces useful slop at volume, humans desloppify it, leverage is high, and the work is fun. This window may close as AI gets superhuman.
    • Market first, team second (90 percent of the time). Great teams die in bad markets. The exception is when you meet someone truly exceptional at the very earliest stage.
    • The one belief framework. If your investment memo needs three core beliefs to be true, it is too complicated. Coinbase was an index on crypto. Stripe was an index on e-commerce. That was the entire memo.
    • The four year vest is a relic. It exists because in the 1970s companies actually went public in four years. Today the private window has stretched to 20 years and venture has eaten what used to be public market growth investing.
    • Boards are in-laws. You cannot fire investor board members. Take a worse price for a better board member, because as Naval Ravikant said, valuation is temporary, control is forever.
    • Right now, consensus is correct. Save the contrarianism. The smart move is to just buy more AI exposure rather than try to outsmart the obvious.
    • Distribution wins more than founders admit. Google paid hundreds of millions to push the toolbar. Facebook bought ads on people’s own names in Europe. TikTok spent billions on user acquisition. Allbirds (yes, the shoe company) just raised a convert to build a GPU farm.
    • Anti-AI sentiment will get worse before it gets better. Maine banned new data centers. There has been violence directed at AI leaders. Expect more political and activist backlash, especially as AI is blamed for harms it has not yet caused while its benefits are mismeasured.
    • Use AI as a cold reader. Elad uploads photos of founders to AI models with cold reading prompts and reports surprisingly accurate personality assessments based on micro features.

    Detailed Summary

    The Numbers Are Insane and Mostly Underappreciated

    The most stunning data point in either source is the GDP math. US GDP is roughly 30 trillion dollars. OpenAI and Anthropic are each rumored to be at roughly 30 billion dollars in revenue run rate, putting each one at 0.1 percent of US GDP. Add cloud AI revenue and the picture gets stranger: AI has grown from essentially zero to between 0.25 and 0.5 percent of GDP in only a few years. If the labs each hit 100 billion in run rate by year-end 2026, AI as a whole will be at roughly 1 percent of GDP run rate inside a single year. There is no historical analog for that pace. Elad notes that productivity gains from AI may end up mismeasured the way internet productivity was undercounted in the 2000s, which would have downstream consequences for regulation: AI gets blamed for the bad (job losses) and credited for none of the good (new jobs, education gains, healthcare improvements). His half-joking aside is that the real ASI test may be the ability to actually measure AI’s economic impact.
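
    For readers who want to check the arithmetic, here is the same math as a few lines of Python. The figures are the episode's rough numbers, and the cloud AI revenue placeholder is my own assumption, not Elad's.

    ```python
    # Back-of-envelope GDP math using the episode's rough figures.
    us_gdp = 30e12        # ~30 trillion USD
    lab_run_rate = 30e9   # each lab at ~30 billion USD revenue run rate

    print(f"per lab: {lab_run_rate / us_gdp:.2%} of GDP")      # 0.10%

    # Two labs plus cloud AI revenue lands in the 0.25 to 0.5 percent range.
    # If each lab reaches ~100B by year-end 2026, plus cloud AI revenue
    # (~100B placeholder, my assumption), the total approaches 1% of GDP.
    total_2026 = 2 * 100e9 + 100e9
    print(f"2026 scenario: {total_2026 / us_gdp:.2%} of GDP")  # ~1.00%
    ```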

    The AI Personal IPO

    The most underdiscussed phenomenon in AI right now, according to Elad, is what he calls a class-wide personal IPO. When a company IPOs, a subset of employees become wealthy, lose focus, and either start companies, get into politics, fund passion projects, or check out. Meta started aggressively bidding for AI talent. Other major labs had to match. The result was 50 to a few hundred researchers, scattered across multiple labs, suddenly receiving compensation in the tens to hundreds of millions of dollars. The only historical analog Elad can think of is early crypto holders around 2017. Some chunk of these newly wealthy researchers will redirect attention to AI for science, side projects, or quiet quitting. The aggregate field stays mission aligned, but the distribution of attention has shifted.

    The Korean Memory Bottleneck

    Every major AI lab today is building giant Nvidia clusters paired with high bandwidth memory, primarily from Korean fabs and a few other suppliers. They run massive amounts of data through these clusters for months, and the output is, almost absurdly, a single flat file containing what amounts to a compressed version of human knowledge plus reasoning. Right now, the binding constraint on this whole stack is HBM from Hynix, Samsung, Micron, and others: memory fab capacity has lagged every other piece of the system. Elad estimates this constraint persists for roughly two years. After that, the next walls are likely data center construction and power. The strategic implication is enormous. While memory constrains everyone, no single lab can buy 10x the compute of its rivals, so capabilities should stay roughly comparable across the major labs. Once that constraint lifts, possibly around 2028, one player could theoretically pull dramatically ahead, especially if AI-assisted AI research closes a self-improvement loop inside one lab.

    Compute Is the New Currency

    The blog post sharpens a framing that runs throughout the podcast: compute, denominated in tokens, is now a unit of economic value. Token budgets define what an engineer can accomplish, what a company can spend, and what business models work. Some companies are effectively inference providers wearing tool costumes. Neoclouds are the cleanest example. Cursor is another, subsidizing inference as a user acquisition strategy. The most absurd recent example: Allbirds, the shoe company, raised a convertible to build a GPU farm. Whether this becomes the AI version of MicroStrategy’s Bitcoin trade or a cautionary tale, it tells you where the cost of capital believes the next decade is going.

    The Dot-Com Survival Math

    Elad walks through the brutal arithmetic that AI founders should be internalizing. In the late 1990s and early 2000s, somewhere between 1,500 and 2,000 internet companies went public. Of those, roughly a dozen or two survived in any meaningful form. Every cycle has looked like this: automotive in the early 1900s, SaaS, mobile, crypto. There is no reason AI will be different. Most current AI companies, including those ramping revenue today, will see the market, competition, and adoption turn on them. The question every AI founder should be asking is whether they are in the durable handful or not.

    Most AI Companies Should Consider Exiting in the Next 12 to 18 Months

    This is the most actionable and most uncomfortable take in either source. While the tide is rising, every AI company looks unstoppable. Whether they actually are, in a 10 year frame, is a separate question. Founders running successful AI companies should take a cold honest look at whether the next 12 to 18 months is their value maximizing window. Companies typically have a 6 to 12 month peak before some headwind hits, often visible in the second derivative of growth. The best signal that you should sell is when growth rate is starting to plateau and you can see why. A handful of companies (OpenAI, Anthropic, the durable winners) should never exit. Many others should, while everything is still on the upswing.

    What Makes an AI Company Durable

    Elad lays out four lenses for evaluating durability at the application layer:

    1. Does your product get dramatically better when the underlying model gets better, in a way that keeps customers loyal?
    2. How deep and broad is the product? Are you building multiple integrated products embedded in actual workflows?
    3. Are you embedded in real change management at the customer? AI adoption is mostly a workflow change problem, not a tech problem. Workflow embedding is durable.
    4. Are you capturing and using proprietary data in a way that creates a system of record? Data moats are often overstated, but sometimes real.

    At the lab layer, Elad believes OpenAI, Anthropic, and Google are durable absent disaster. He predicted three years ago that the foundation model market would settle into an oligopoly aligned with cloud, and that prediction has roughly held.

    Selling Work, Not Software

    The deepest structural insight in the conversation is that generative AI is shifting what software companies sell. The old model was selling seats, tools, and SaaS subscriptions. The new model is selling units of cognitive labor. Zendesk sold seats to support reps. Decagon and Sierra sell agentic support output. Harvey can win in legal even though selling to law firms was historically considered a terrible business, because Harvey is not selling tools, it is augmenting lawyer output. This shift opens markets that were previously closed and dramatically grows tech TAMs. It is also why founder-limited theories of entrepreneurship, the idea that the supply of great founders caps the number of great companies, currently understate how many opportunities exist.

    AI Eats Closed Loops First

    One of the cleanest mental models in the blog post is the closed loop framework. AI automates first what can be turned into a testable closed loop. Code is the canonical example: outputs can be tested, errors detected, models can iterate. AI research is similar. Both have tight feedback loops and high economic value, which puts them at the top of the AI impact ranking. Map jobs on a 2×2 of closed loop tightness vs economic value and you can see where AI hits soonest. The interesting forward question is which jobs become more closed loop next. Data collection and labeling will keep growing in every field as a result.

    The Harness Matters More Than People Think

    For coding tools and increasingly for enterprise applications, what Elad calls the harness, the wrapper of UX, prompting, workflow integration, and brand around the underlying model, is becoming sticky. It is not just which model you call. It is the environment built around it. Cursor and Windsurf demonstrate this in coding. The interesting open questions are what the harness looks like for sales AI, for AI architects, for analyst workflows. Those gaps leave room for startups even as model capabilities converge.

    Hidden Layoffs and the Developing World

    Most announced AI driven layoffs are probably just COVID era overhiring corrections wrapped in a more flattering narrative. But real AI driven labor displacement is happening, and it is hitting outsourcing firms first. That means countries like India and the Philippines, where many outsourced services jobs sit, are likely to be the most impacted earliest. Several developing economies built their growth ladders on services exports. If AI takes those jobs first, the migration and economic patterns of the next decade may shift in ways nobody is yet planning for.

    The Flat Company

    Multiple late stage CEOs told Elad they will not announce big AI layoffs. Instead, they will simply stop growing headcount. If revenue grows 30 to 100 percent, headcount stays flat or shrinks via attrition. Existing employees become dramatically more productive. The very best people who can leverage AI will see compensation inflate. Sales and some growth engineering keep hiring. Almost everything else flatlines. This is mostly a later stage and public company phenomenon. True early stage startups should still scale aggressively after product market fit, just with more leverage per person.

    Exit Options for AI Founders

    Elad lays out four exit categories. First, the labs and hyperscalers themselves: Apple, Amazon, Google, Microsoft, Meta. Second, vertical incumbents like Thomson Reuters for legal or healthcare giants for clinical AI. Third, the underrated category of merger of equals between two private AI competitors who are currently destroying each other on price. PayPal and X.com did this in 2000. Uber and Lyft reportedly almost did. Fourth, large adjacent tech companies: Oracle, Samsung, Tesla, SpaceX, Snowflake, Databricks, Stripe, Coinbase. The market cap math has changed in a way that makes acquisition trivial. One percent of a three trillion dollar market cap is 30 billion dollars, which means a hyperscaler can do massive acquisitions almost casually.

    Geographic Concentration Is Extreme

    Elad’s team analyzed where private market cap aggregates. Historically half of global tech private market cap sat in the US, with half of that in the Bay Area. With AI, 91 percent of global AI private market cap is in a single 10 by 10 mile square in the Bay Area. New York is a distant second and then it falls off a cliff. For defense tech, the cluster is Southern California (SpaceX and Anduril, clustered around El Segundo and Irvine). Fintech and crypto skew toward New York. The remote everywhere advice is, Elad says, just BS for anyone trying to break into an industry cluster.

    How Elad Got Into His Best Deals

    Stripe started with Elad cold emailing Patrick Collison after selling an API company to Twitter. A couple of walks later, Patrick texted that he was raising and Elad was in. Airbnb came from helping the founders raise their Series A and being asked at the end if he wanted to invest. Anduril came from noticing that Google had walked away from Project Maven and asking if anyone was building defense tech, then meeting Trae Stephens at a Founders Fund lunch. Perplexity came from Aravind Srinivas cold messaging him on LinkedIn while still at OpenAI. Across all of these, the pattern is the same: be in the cluster, be helpful, be talking publicly about technology nobody else is talking about, and be useful to founders before any money is on the table.

    The One Belief Framework

    Investors love complicated 50 page memos. Elad believes the actual decision usually collapses into a single core belief. Coinbase: this is an index on crypto, and crypto will keep growing. Stripe: this is an index on e-commerce, and e-commerce will keep growing. Anduril: AI plus drones plus a cost-plus model will be important for defense. If your thesis needs three things to be true, it is probably not going to work. If it needs nothing, you have no thesis.

    Boards as In-Laws

    Elad emphasizes that founders should treat board composition like one of the most important hiring decisions of the company. You cannot fire an investor board member. They have contractual rights. So if you are going to be stuck with someone for a decade, take a worse valuation for a better human. Reid Hoffman’s framing is that the best board member is a co-founder you could not have otherwise hired. Naval Ravikant’s framing is that valuation is temporary but control is forever. Elad recommends writing a job spec for every board seat.

    The Slop Age as a Golden Era

    One of the warmest takes in the blog post is the framing of the current moment as the Slop Age, and the suggestion that this might actually be the golden era of AI plus humanity. Before the last few years, AI was inaccessible and narrow. Eventually AI may become superhuman at most tasks. Today, AI produces useful slop at volume, which means humans are still needed to desloppify the slop, but the leverage on time and ambition is real. That makes the work fun. If AI displaces people or starts doing more interesting work, this golden moment fades. Elad also notes the obvious counter, that the era of human generated internet slop preceded the AI slop era. AGI may end the slop age, or alternately may be the thing that finally cleans up all the prior waves of human slop.

    Anti-AI Regulation and Violence Will Increase

    This is one of the more sobering threads in the blog post. Real world AI driven labor displacement has been small so far, but anti-AI sentiment is already strong and growing. Maine just banned new data centers. There has been actual violence directed at AI leaders, including a recent attack on Sam Altman. Elad’s view is that AI leaders should work harder on optimistic public framing, real political lobbying, and reining in the doom narrative coming from inside the field. Otherwise the regulatory and activist backlash will get much worse, and likely on the basis of mismeasured impacts.

    Right Now Consensus Is Correct

    The headline contrarian take from the episode is that contrarianism right now is wrong. There are moments in time when betting against the crowd pays. This is not one of them. The smart bet is just buying more AI exposure. Trying to find the clever angle, the underlooked hardware play, the secret macro thesis, is overthinking it. Save the contrarian moves for later in the cycle.

    Distribution Almost Always Matters

    Elad pushes back on the founder mythology that great products win on their own. Google paid hundreds of millions of dollars in the early 2000s to distribute its toolbar through every popular app installer on the internet. Facebook bought search ads against people’s own names in European markets to seed network liquidity. TikTok spent billions on user acquisition before its algorithm could lock people in. Snowflake spent enormous sums on enterprise sales and channel partnerships. Sometimes the best product wins. Often the company with the best distribution wins. Founders should plan for both.

    AI as a Cold Reader and a Research Partner

    Two of the more practical AI workflows Elad describes: First, uploading photos of founders to AI models with cold reading prompts that ask the model to identify micro features (crow’s feet from genuine smiling, brow patterns, posture cues) and infer personality traits, sense of humor, and likely social behavior. He reports the outputs are surprisingly specific. Second, running deep dives across multiple models in parallel (Claude, ChatGPT, Gemini), asking each for primary sources, summary tables, and cross-checked data. He recently used this approach to investigate the rise in autism and ADHD diagnoses, concluding that diagnostic criteria shifts and school incentives drive most of it, and noting that maternal age has a stronger statistical association with autism than paternal age, despite paternal age getting all the public discourse.

    The First Ever 10 Year Plan

    For someone who has been compounding aggressively for two decades, Elad has somehow never written a 10 year plan until now. He knows it will not play out as written. The point is that the act of imagining a decade out shifts what you choose to do in the near term. He explicitly rejects the AGI in two years therefore plans are pointless framing as defeatist. There will be interesting things to do regardless of how the AGI timeline plays out.

    Thoughts

    This is one of the more useful AI investor conversations of 2026, mostly because Elad is willing to put numbers and timelines on things that are usually left vague. Pairing the podcast with the underlying Substack post is the right move because the post is where the GDP math, the closed loop framework, and the Slop Age framing actually live. The podcast is where Elad explains how he thinks rather than just what he thinks.

    The 12 to 18 month sell window framing is the most actionable single idea in either source, and probably the most uncomfortable for AI founders sitting on multi billion dollar paper valuations. The math is unforgiving. A dozen winners out of thousands. If you are honest with yourself about whether you are in the dozen, you know what to do.

    The Korean memory bottleneck framing explains a lot of current behavior. The talent wars make more sense once you accept that compute is not going to be the differentiator for two years, so people become the only remaining lever. The convergence of capabilities across OpenAI, Anthropic, Google, and xAI starts to look less like coincidence and more like the structural inevitability of a supply constrained input. The 2028 inflection date is the one to watch.

    Compute as currency is the cleanest reframing in the blog post. Once you start pricing companies in tokens rather than dollars, everything from Cursor’s economics to Allbirds raising a convert to build a GPU farm becomes legible. The interesting question is whether this is a permanent unit of denomination or a transitional one that fades when inference costs collapse.

    The software to labor argument is the structural framing that I think will hold up the longest. Once you internalize that we are not selling seats anymore but selling cognitive output, every vertical that was previously locked behind ugly procurement and IT inertia opens up. Harvey is the proof of concept. There will be 30 more Harveys across every white collar profession.

    The closed loop framework is the cleanest predictor of which jobs get hit hardest and soonest. If you want to know whether your role is exposed, the questions to ask are whether outputs can be machine evaluated, how tight the feedback loop is, and how high the economic value is. The intersection is where AI lands first.

    The geographic concentration data is genuinely shocking. 91 percent of global AI private market cap in a 10 by 10 mile area is the kind of statistic that should make everyone outside that square think very carefully about what game they are playing.

    The Slop Age framing is the most emotionally honest moment in the post. We are in a window where humans still meaningfully add value on top of AI output. That window is finite. Enjoy it.

    The anti-AI backlash thread is the one I think most people in the industry are still underweighting. Maine banning new data centers is a leading indicator, not a one off. The fact that the impacts are likely to be mismeasured by official statistics makes the political dynamics worse, not better. AI will get blamed for harms it did not cause and credited for none of the gains. If the field’s leaders do not start communicating better and lobbying smarter, the regulatory environment in 2028 will be much worse than in 2026.

    Finally, Elad’s first ever 10 year plan stands out as the most quietly important moment in the episode. The implicit message is that even people who have been compounding aggressively for two decades benefit from forcing a longer time horizon onto their thinking. Most plans fail. The act of planning still changes what you do today.

    Read the original Elad Gil post here: Random thoughts while gazing at the misty AI Frontier. Find Elad on X at @eladgil, on his Substack at blog.eladgil.com, and on his website at eladgil.com. Tim Ferriss publishes the full episode at tim.blog/podcast.

  • How GPT-5, Claude, and Gemini Are Actually Trained and Served: The Real Math Behind Frontier AI Infrastructure

    Reiner Pope, CEO of MatX and former TPU architect at Google, sat down with Dwarkesh Patel for a different kind of episode: a chalk-and-blackboard lecture on how frontier LLMs like GPT-5, Claude, and Gemini are actually trained and served. With nothing but a handful of equations and public API prices, Reiner reverse engineers an astonishing amount of what the labs are doing. If you have ever wondered why Fast Mode costs more, why context length stalls around 200k tokens, why models seem 100x over-trained, or why hyperscalers are pouring half a trillion dollars into memory, this is the most lucid explanation on the internet.

    TLDW

    Frontier LLM economics come down to two simple budgets: compute time and memory time. Once you write the rooflines on a blackboard, almost everything else falls out of them. Optimal batch size is roughly 300 times your sparsity ratio (around 2,000 to 3,000 tokens for a DeepSeek-style model). A new batch “train” departs every 20 milliseconds because that is how long it takes to read HBM end to end. Mixture of experts strongly favors staying inside a single rack, which is why scale-up domains went from 8 GPUs (Hopper) to 72 (Blackwell) to 500-plus (Rubin). Pipeline parallelism solves weight capacity but does nothing for KV cache, and adds painful per-hop latency, which is why Ilya famously said pipelining is not wise. Because of reinforcement learning and inference economics, frontier models are roughly 100x over-trained versus Chinchilla optimal, and a well-tuned model should output roughly as many tokens during deployment as went into its pre-training corpus. API prices leak the rest: Gemini’s 50% premium above 200k tokens reveals where KV memory time crosses weight memory time, prefill being 5x cheaper than decode confirms decode is memory bandwidth bound, and cache hit pricing tiers map directly to HBM, DDR, flash, and (yes) spinning disk. The lecture closes on a beautiful detour about the convergent evolution of neural nets and cryptographic ciphers.

    Key Takeaways

    • Two equations explain almost everything. A roofline analysis comparing compute time to memory fetch time predicts cost, latency, and architectural choices with shocking accuracy.
    • Optimal batch size is about 300 times sparsity. For a DeepSeek model that activates 32 of 256 experts, that lands around 2,000 to 3,000 tokens per batch. Real deployments go a bit higher to leave headroom.
    • The 20 millisecond train. A new batch departs every 20ms because that is how long it takes to read all of HBM once. Worst-case queue latency is roughly 40ms.
    • Fast Mode is just smaller batches. Pay 6x more, get 2.5x faster decode by amortizing weights over fewer users. There is a hard latency floor at the HBM read time.
    • Slow Mode would not save much. Once you are past the optimal batch size, the cost-per-token plateau is dominated by compute, not weight fetches. You cannot meaningfully amortize KV cache because it is unique per sequence.
    • One rack is the natural MoE unit. Expert parallelism wants all-to-all communication, which strongly favors the scale-up network (NVLink) over the scale-out network (roughly 8x slower).
    • Bigger scale-up domains drove model scaling. The jump from 8 (Hopper) to 72 (Blackwell) to 500-plus (Rubin) GPUs per rack increased aggregate memory bandwidth by 8x, which is why trillion-plus parameter models only became viable recently.
    • Pipeline parallelism is overrated for inference. It saves on weight memory capacity but does nothing for KV cache memory. It also adds milliseconds of latency per hop in decode.
    • Why Ilya said pipelining is not wise. Architectural constraints (cross-layer residuals like in Kimi) and the inability to amortize weight loads across micro-batches make pipelining a hassle in training too.
    • The memory wall is real and paradoxical. Hyperscalers reportedly spend 50% of CapEx on memory, yet racks have far more HBM than a trillion-parameter model needs. The capacity is there for KV cache and batch size, not for weights.
    • Frontier models are roughly 100x over-trained vs Chinchilla. When you minimize total cost across pre-training plus RL plus inference, smaller models trained on more data win.
    • Each model should output roughly all human knowledge. If you equalize pre-training and inference compute, the total tokens served by a model during its lifetime should approximate its training corpus. Roughly 150 trillion in, 150 trillion out.
    • API pricing reveals architecture. Gemini’s 50% premium above 200k context, the 5x decode-vs-prefill ratio, and cache duration tiers all leak detailed information about KV size, memory bottlenecks, and storage hierarchy.
    • KV cache is roughly 2KB per token. Solving Gemini’s pricing equation gives a plausible 1.6 to 2 kilobytes per token at 100B active parameters and 200k context.
    • Decode is memory bandwidth bound, prefill is compute bound. The 5x price gap is direct evidence.
    • Cache pricing maps to memory tiers. The 5-minute and 1-hour cache durations probably correspond to flash and spinning disk drain times respectively. LLM serving uses spinning disk.
    • Context length is stuck near 200k. Memory bandwidth, not compute, is the binding constraint. Sparse attention gives a square-root improvement but is not infinite.
    • Cryptography and neural nets are mathematical cousins. Both rely on jumbling information across inputs. Feistel ciphers led directly to RevNets (reversible neural networks). Adversarial attacks mirror the cipher avalanche property.

    Detailed Summary

    The Roofline: Compute Time vs Memory Time

    Reiner starts with the simplest possible model of LLM inference. The time to do a forward pass is bounded below by the maximum of compute time and memory fetch time. Compute time is the batch size times active parameters divided by FLOPs. Memory time is total parameters divided by memory bandwidth, plus a KV cache term that scales with batch size and context length. From these two equations, almost every economic and architectural fact about modern LLMs can be derived.

    Plotting cost per token against batch size gives a clean picture: at low batch you pay enormous overhead because you cannot amortize the weight fetches, and at high batch you hit a compute floor. There is a sweet spot where memory bandwidth time equals compute time. That sweet spot is what Fast Mode and Slow Mode are tuning around.
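
    Here is a minimal sketch of those two budgets in Python, assuming illustrative single-accelerator numbers rather than anything quoted in the episode. Taking the max of the two times bounds the step, and dividing by batch size reproduces the curve described above: an expensive weight-amortization region at low batch and a compute floor at high batch.

    ```python
    # Roofline sketch of one decode step: time is bounded below by the
    # larger of compute time and memory time. All numbers are illustrative.

    def step_time(batch, active_params, total_params, ctx_len,
                  kv_bytes_per_token, flops, mem_bw, bytes_per_weight=1):
        t_compute = batch * 2 * active_params / flops         # ~2 FLOPs per active param per token
        t_weights = total_params * bytes_per_weight / mem_bw  # read every weight once per step
        t_kv = batch * ctx_len * kv_bytes_per_token / mem_bw  # each sequence reads its own KV cache
        return max(t_compute, t_weights + t_kv)

    def cost_per_token(batch, **kw):
        return step_time(batch, **kw) / batch  # amortize the step over the batch

    cfg = dict(active_params=100e9, total_params=800e9, ctx_len=50_000,
               kv_bytes_per_token=2_000, flops=1e15, mem_bw=3.35e12)
    for b in (64, 256, 1024, 4096, 16384):
        print(f"batch {b:>5}: {cost_per_token(b, **cfg) * 1e6:.1f} us/token")
    ```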

    Why Fast Mode Costs More: The Batch Trade-Off

    When Claude Code or Codex offers Fast Mode at 6x the price for 2.5x the speed, what is really happening is that they are running you at a smaller batch size. Smaller batch means weight loads are amortized over fewer users, so cost per token goes up. But latency goes down because each forward pass touches less data. There is a hard floor on latency because you have to read every byte of HBM at least once per token, and that takes about 20 milliseconds on Blackwell-class hardware. There is also a soft ceiling on Slow Mode savings because the unamortizable parts (KV cache fetches, compute) eventually dominate.

    The 20 Millisecond Train

    HBM capacity divided by HBM bandwidth lands consistently around 20 milliseconds across generations of Nvidia hardware. That is the natural cadence at which a frontier model can run a forward pass over all its weights. Reiner uses a memorable analogy: a train departs every 20 milliseconds. Any users whose requests are ready board the train. If the train is full, they wait. If it is empty, it leaves anyway. This is why you do not need millions of concurrent users to saturate a model’s batch. You only need enough to fill a 2,000-token train every 20ms.
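
    The cadence itself is one division: HBM capacity over HBM bandwidth. The specs below are rough public numbers for a Blackwell-class part, my assumption rather than figures stated in the episode. The ratio is the same at rack scale, since capacity and bandwidth both grow with GPU count.

    ```python
    hbm_capacity = 192e9    # bytes of HBM per Blackwell-class GPU (rough spec)
    hbm_bandwidth = 8e12    # bytes/sec (rough spec)

    train_period = hbm_capacity / hbm_bandwidth
    print(f"full HBM read: ~{train_period * 1e3:.0f} ms")        # ~24 ms; the episode rounds to 20
    print(f"worst-case wait: ~{2 * train_period * 1e3:.0f} ms")  # you just missed the last train
    ```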

    Why Optimal Batch Size Is About 300 Times Sparsity

    Setting compute time equal to weight fetch time and rearranging gives a beautiful result: batch size needs to be greater than (FLOPs / memory bandwidth) times (total params / active params). The hardware ratio is a dimensionless 300 on most GPUs and has stayed remarkably stable from A100 through Hopper, Blackwell, and Rubin. The model term is just the sparsity ratio. For DeepSeek with 32 of 256 experts active, that is 8. So optimal batch is around 2,400 tokens. Real deployments push this to 3x to leave headroom for non-ideal efficiency. At 64 trains per second, that is roughly 128,000 tokens per second per replica, or about 1/1000 of Gemini’s reported global throughput.
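
    The same arithmetic in code, using H100-ish public specs as an assumption. The factors of FLOPs per parameter and bytes per weight roughly cancel, which is why the hardware ratio comes out dimensionless:

    ```python
    flops = 989e12    # dense bf16 FLOPs/sec, H100-ish (rough spec)
    mem_bw = 3.35e12  # HBM bytes/sec, H100-ish (rough spec)
    hw_ratio = flops / mem_bw                 # ~295, stable across generations

    sparsity = 256 / 32                       # DeepSeek-style: 32 of 256 experts active
    optimal_batch = hw_ratio * sparsity
    print(f"optimal batch: ~{optimal_batch:,.0f} tokens")   # ~2,400

    trains_per_sec = 1 / 0.020                # one train every ~20 ms
    print(f"~{optimal_batch * trains_per_sec:,.0f} tokens/sec per replica")  # ~120k
    ```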

    Mixture of Experts Wants to Live Inside a Rack

    MoE all-to-all routing means every token can be sent to any expert on any GPU. The communication pattern strongly prefers the fast scale-up network (NVLink) inside a rack to the slower scale-out network between racks. Scale-out is roughly 8x slower in bandwidth. This is why one rack ends up being the natural unit for an expert layer, and why Nvidia’s progression from 8 GPUs per rack (Hopper) to 72 (Blackwell) to 500-plus (Rubin) has been such a big deal for model size scaling.

    Reiner walks through the physical constraints: cable density, bend radius, weight, power, cooling. Modern racks are pushing every dimension to the limit. Stuffing more GPUs into the scale-up domain is genuinely a hardware engineering problem.

    Pipeline Parallelism: Why Ilya Said It Is Not Wise

    Pipelining splits model layers across racks. It is the natural way to scale beyond the scale-up domain for very large models. But it has problems. In inference, pipelining does not save runtime; it only saves memory capacity per rack, which is already not the binding constraint, because trillion-parameter models only need about a terabyte of weights and racks have 10x that. In training, pipelining creates the famous bubble (idle GPU time at the start and end of each pipeline pass) and forces micro-batching, which kills your ability to amortize weight loads across the global batch.

    There is also an architectural cost. Models like Kimi use cross-layer residual connections where attention attends to layers a few back, and pipelining makes those patterns very hard to implement cleanly. Ilya’s quip “as we now know, pipelining is not wise” captures all of this.

    The Memory Wall Paradox

    Industry analysts report that hyperscalers are spending 50% of CapEx on memory this year, while smartphones and laptops are seeing 30% volume drops because there is not enough HBM and DDR to go around. Yet a Blackwell rack already has tens of terabytes of HBM, far more than a trillion-parameter model needs. The reason is that all that extra capacity goes to KV cache, batch size, and longer context. The bandwidth, not the capacity, is what matters most for weight loading. This also implies that hardware could be designed with less HBM per GPU if you commit to pipelining the weights, which is a real architectural option for a chip startup like MatX.

    Reinforcement Learning and the 100x Over-Training of Frontier Models

    Chinchilla scaling laws say a model with N active parameters should be trained on roughly 20N tokens for compute-optimal training. But frontier labs do not just minimize training cost. They minimize training plus inference cost across the model’s deployment lifetime. With reinforcement learning added to the mix, the cost equation has three terms: pre-training (6 times active params times tokens), RL (somewhere between 2 and 6 times active params times RL tokens, with a 30% efficiency penalty for decode-heavy rollouts), and inference (2 times active params times inference tokens).

    If you assume those three roughly equalize at the optimum (a heuristic that holds for many cost curves), you get a clean conclusion: the data going into pre-training should be roughly equal to the data going into RL, which should be roughly equal to the tokens served at inference. With 100 billion active parameters and roughly 150 trillion training tokens, that is about 75x past Chinchilla optimal. Reiner rounds it to 100x. This is the most concrete first-principles argument for why frontier models are so deeply over-trained, and it implies that as inference traffic grows, models should keep getting smaller and longer-trained.
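
    Here is that cost model as a sketch, with the episode's coefficients and token counts plugged in. Using the midpoint of the 2x to 6x RL band is my simplification:

    ```python
    N = 100e9    # active parameters
    D = 150e12   # tokens, equalized across pre-training, RL, and inference

    pretrain = 6 * N * D    # classic 6ND pre-training FLOPs
    rl = 4 * N * D / 0.7    # midpoint of the 2x-6x band, ~30% decode penalty
    inference = 2 * N * D   # ~2ND for serving

    for name, cost in [("pretrain", pretrain), ("rl", rl), ("inference", inference)]:
        print(f"{name:>9}: {cost:.1e} FLOPs")   # all three within ~3x of each other

    chinchilla_tokens = 20 * N   # ~2e12 tokens
    print(f"over-training factor: ~{D / chinchilla_tokens:.0f}x")   # ~75x
    ```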

    Each Model Should Output All of Human Knowledge

    The most jaw-dropping consequence: if you equalize pre-training and inference compute, then the total tokens generated by a model across its deployment lifetime should approximate the size of its training corpus. GPT-5, served to hundreds of millions of users for two months, will collectively output something on the order of 150 trillion tokens. That is roughly the sum of human knowledge in textual form. Each frontier model is, in this sense, a one-shot universal author of a corpus the size of its source material.

    API Prices Leak Architecture

    This is where the lecture gets really fun. Gemini 3.1 charges 50% more for context above 200k tokens. Setting memory time equal to compute time at exactly 200k context and solving for KV cache size gives roughly 1.6 to 2 kilobytes per token, which is plausible for a model with 8 KV heads, dense attention, and head dimension of 128.
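
    The breakpoint calculation is a one-liner: at the context length where the surcharge starts, per-token KV read time crosses per-token weight time, so the KV size falls out of the hardware ratio. Parameter count and ratio below follow the episode's estimates:

    ```python
    active_params = 100e9   # estimated active parameters
    hw_ratio = 300          # FLOPs / memory bandwidth, dimensionless
    context_break = 200e3   # context length where the 50% surcharge begins

    kv_bytes_per_token = active_params / (hw_ratio * context_break)
    print(f"~{kv_bytes_per_token:,.0f} bytes per token")   # ~1,700, the 1.6-2 KB above
    ```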

    The 5x premium for output (decode) tokens versus input (prefill) tokens is direct evidence that decode is severely memory bandwidth bound and prefill is compute bound. Prefill processes many tokens per weight load, so it amortizes memory cost over the whole sequence. Decode processes one token per weight load, so it pays full memory cost every time.

    Cache hits priced at one tenth of cache misses tell you that storing the KV cache in HBM (or DDR or flash) is much cheaper than recomputing it from scratch. The two cache duration tiers (5 minutes and 1 hour) probably correspond to memory tiers whose drain times match those durations: flash for the 5-minute tier, spinning disk for the 1-hour tier. Yes, spinning disk is in the modern LLM serving stack, despite being decades-old technology.
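
    The drain-time reading is easy to sanity check: divide a tier's capacity by its bandwidth. The capacities and bandwidths below are placeholders I picked to make the point, not numbers from the episode:

    ```python
    # drain time = tier capacity / tier bandwidth (illustrative tiers only)
    flash_drain = 30e12 / 100e9   # 30 TB NVMe pool at ~100 GB/s
    disk_drain = 1e12 / 250e6     # ~1 TB per spindle at ~250 MB/s

    print(f"flash: ~{flash_drain / 60:.0f} min")      # ~5 minutes, the short TTL
    print(f"disk:  ~{disk_drain / 3600:.1f} hours")   # ~1 hour, the long TTL
    ```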

    Why Context Length Has Plateaued at 200k

    Context lengths shot up from 8k to roughly 200k during the GPT-3 to GPT-4 era and have stayed roughly flat for the past two years. Reiner argues this is the natural balance point where memory bandwidth cost crosses compute cost. Going to a million tokens is expensive. Going to 100 million tokens (which Dario has hinted is needed for true continual learning via in-context learning) is essentially impossible without either a memory technology breakthrough or a much more aggressive sparse attention scheme. Sparse attention helps with a square-root improvement, but it is not unlimited. Going too sparse trades off too much quality.

    Cryptography Meets Neural Nets

    The episode ends with a lovely intellectual detour. Cryptographic protocols and transformer architectures both rely on jumbling information across all inputs. They are doing inverse versions of the same operation: ciphers take structured input and produce randomness, while neural nets take noisy input and extract structure. Both fields use differentiation as their primary attack vector (differential cryptanalysis on ciphers, gradient descent on neural nets). Adversarial attacks on image classifiers exploit exactly the avalanche property that good ciphers are designed for.

    The most concrete crossover: Feistel ciphers, which let you build invertible functions out of non-invertible ones, were ported into deep learning as RevNets (reversible networks) in 2017. RevNets let you run the entire network backwards during the backward pass, eliminating the need to store activations and dramatically reducing training memory footprint. It is the opposite trade-off of KV caching: spending compute to save memory rather than spending memory to save compute.
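
    A toy version of the Feistel-to-RevNet idea in NumPy. The coupling functions F and G are not invertible themselves, yet the block inverts exactly, which is what lets a reversible network recompute activations instead of storing them. This is a minimal sketch of the structure, not the RevNet paper's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    W_f = 0.1 * rng.standard_normal((4, 4))
    W_g = 0.1 * rng.standard_normal((4, 4))

    def F(x): return np.tanh(x @ W_f)   # arbitrary, non-invertible sub-functions
    def G(x): return np.tanh(x @ W_g)

    def forward(x1, x2):                # Feistel-style additive coupling
        y1 = x1 + F(x2)
        y2 = x2 + G(y1)
        return y1, y2

    def inverse(y1, y2):                # exact inverse: subtract in reverse order
        x2 = y2 - G(y1)
        x1 = y1 - F(x2)
        return x1, x2

    x1, x2 = rng.standard_normal((2, 4))
    y1, y2 = forward(x1, x2)
    r1, r2 = inverse(y1, y2)
    print(np.allclose(x1, r1) and np.allclose(x2, r2))   # True: no stored activations needed
    ```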

    Thoughts

    The most striking thing about this episode is how much can be deduced from a few equations and the public API price sheets of the major labs. The labs treat their architectures as trade secrets, but the moment they price tokens to be close to cost (which competition forces them to do), the prices themselves leak the underlying ratios. Anyone with a pen and paper can reverse engineer the KV cache size, the memory tier hierarchy, and the compute-vs-memory bottleneck profile of a frontier model. There is a lesson here for builders: in competitive markets, the prices tell you almost everything.

    The 100x over-training result has interesting implications for what comes next. If the optimal balance shifts further toward inference (as adoption keeps growing), models should get smaller and longer-trained. That is good news for serving costs and bad news for training-compute-as-moat. The biggest determinant of model quality might increasingly be data quality and RL environment design, not raw pre-training compute. This squares with what is visible publicly: the leading labs are investing heavily in RL infrastructure, evaluations, and synthetic data pipelines.

    The memory wall is the most underrated infrastructure story in AI. Most people think of compute as the bottleneck, but Reiner makes it clear that memory bandwidth is what actually limits context length, which limits how agentic a model can be in practice. If you cannot get to 100 million token contexts, you probably cannot have an AI agent that has been working with you for a month and remembers everything. Either some sparse attention scheme has to give us cheap effective context length, or we need a memory hardware breakthrough, or we have to invent some form of continual learning that does not rely on context windows. None of those paths are obviously easy, and the fact that context length has been flat for two years despite enormous investment suggests we are stuck against a real wall.

    The cryptography parallel is the kind of cross-disciplinary insight that does not show up enough in AI discourse. Treating neural networks as a kind of differentiable cipher reframes a lot of the architecture choices (residual connections, layer normalization, attention) as deliberate efforts to make the function smooth and invertible enough to learn, in contrast to ciphers, which are deliberately designed to resist exactly that. Adversarial robustness research probably has a lot more to learn from cryptanalysis than it currently does.

    Finally, the format itself is a win. Most AI podcasts are conversational, which is great for personality but bad for technical depth. A blackboard lecture with an interlocutor who asks naive questions at the right moments is a much higher bandwidth medium. More of this, please.

  • Andrej Karpathy on Vibe Coding vs Agentic Engineering: Why He Feels More Behind Than Ever in 2026

    Andrej Karpathy, co-founder of OpenAI, former head of AI at Tesla, and now founder of Eureka Labs, returned to Sequoia Capital’s AI Ascent 2026 stage for a wide-ranging conversation with partner Stephanie Zhan. One year after coining the term “vibe coding,” Karpathy unpacked what has changed, why he has never felt more behind as a programmer, and why the discipline emerging on top of vibe coding, which he calls agentic engineering, is the more serious craft worth learning right now.

    The conversation covered Software 3.0, the limits of verifiability, why LLMs are better understood as ghosts than animals, and why you can outsource your thinking but never your understanding. Below is a complete breakdown of the talk for anyone building, hiring, or learning in the agent era.

    TLDW

    Karpathy describes a sharp transition that happened in December 2025, when agentic coding tools crossed a threshold and code chunks just started coming out fine without correction. He frames the current moment as Software 3.0, where prompting an LLM is the new programming, and entire app categories are collapsing into a single model call. He distinguishes vibe coding (raising the floor for everyone) from agentic engineering (preserving the professional quality bar at much higher speed). Models remain jagged because they are trained on what labs choose to verify, so founders should look for valuable but neglected verifiable domains. Taste, judgment, oversight, and understanding remain uniquely human responsibilities, and tools that enhance understanding are the ones he is most excited about.

    Key Takeaways

    • December 2025 was a clear inflection point. Code chunks from agentic tools started arriving correct without edits, and Karpathy stopped correcting the system entirely.
    • Software 3.0 means programming has become prompting. The context window is your lever over the LLM interpreter, which performs computation in digital information space.
    • Open Code’s installer is a software 3.0 example. Instead of a complex shell script, you copy paste a block of text to your agent, and the agent figures out your environment.
    • The Menu Gen anecdote illustrates how entire apps can become superfluous. What used to require OCR, image generation, and a hosted Vercel app can now be a single Gemini plus Nano Banana prompt.
    • Vibe coding raises the floor. Agentic engineering preserves the professional ceiling. The two are different disciplines.
    • The 10x engineer multiplier is now far higher than 10x for people who are good at agentic engineering.
    • Hiring processes have not caught up. Puzzle interviews are the old paradigm. New evaluations should look like building a full Twitter clone for agents and surviving simulated red team attacks from other agents.
    • Models are jagged because reinforcement learning rewards what is verifiable, and labs choose which verifiable domains to invest in. Strawberry letter counts and the 50 meter car wash question show how state-of-the-art models can refactor 100,000 line codebases yet fail at trivial reasoning.
    • If you are in a verifiable setting, you can run your own fine tuning, build RL environments, and benefit even when the labs are not focused on your domain.
    • LLMs are ghosts, not animals. They are statistical simulations summoned from pre-training and shaped by RL appendages, not creatures with curiosity or motivation. Yelling at them does not help.
    • Taste, aesthetics, spec design, and oversight remain human jobs. Models still produce bloated, copy paste heavy code with brittle abstractions.
    • Documentation is still written for humans. Agent native infrastructure, where docs are explicitly designed to be copy pasted into an agent, is a major opportunity.
    • The future likely involves agent representation for people and organizations, with agents talking to other agents to coordinate meetings and tasks.
    • You can outsource your thinking but not your understanding. Tools that help humans understand information faster are uniquely valuable.

    Detailed Summary

    Why Karpathy Feels More Behind Than Ever

    Karpathy opens by describing how he has been using agentic coding tools for over a year. For most of that period, the experience was mixed. The tools could write chunks of code, but they often required edits and supervision. December 2025 changed everything. With more time during a holiday break and the release of newer models, Karpathy noticed that the chunks just came out fine. He kept asking for more. He cannot remember the last time he had to correct the agent. He started trusting the system, and what followed was a cascade of side projects.

    He wants to stress that anyone whose model of AI was formed by ChatGPT in early 2025 needs to look again. The agentic coherent workflow that genuinely works is a fundamentally different experience, and the transition was stark.

    Software 3.0 Explained

    The Software 1.0 paradigm was writing explicit code. Software 2.0 was programming by curating datasets and training neural networks. Software 3.0 is programming by prompting. When you train a GPT class model on a sufficiently large set of tasks, the model implicitly learns to multitask everything in the data. The result is a programmable computer where the context window is your interface, and the LLM is the interpreter performing computation in digital information space.

    Karpathy gives two concrete examples. The first is Open Code’s installer. Normally a shell script handles installation across many platforms, and these scripts balloon in complexity. Open Code instead provides a block of text you copy paste to your agent. The agent reads your environment, follows instructions, debugs in a loop, and gets things working. You no longer specify every detail. The agent supplies its own intelligence.

    The Menu Gen Story

    The second example is Karpathy’s Menu Gen project. He built an app that takes a photo of a restaurant menu, OCRs the items, generates pictures for each dish, and renders the enhanced menu. The app runs on Vercel and chains together multiple services. Then he saw a software 3.0 alternative. You take a photo, give it to Gemini, and ask it to use Nano Banana to overlay generated images onto the menu. The model returns a single image with everything rendered. The entire app he built is now superfluous. The neural network does the work. The prompt is the photo. The output is the photo. There is no app between them.

    Karpathy uses this to argue that founders should not just think of AI as a speedup of existing patterns. Entirely new things become possible. His example is LLM driven knowledge bases that compile a wiki for an organization from raw documents. That is not a faster version of older code. It is a new capability with no prior equivalent.

    What Will Look Obvious in Hindsight

    Stephanie Zhan asks what the equivalent of building websites in the 1990s or mobile apps in the 2010s looks like today. Karpathy speculates about completely neural computers. Imagine a device that takes raw video and audio as input, runs a neural net as the host process, and uses diffusion to render a unique UI for each moment. He notes that early computing in the 1950s and 60s was undecided between calculator-like and neural-net-like architectures. We went down the calculator path. He thinks the relationship may eventually flip, with neural networks becoming the host and CPUs becoming coprocessors used for deterministic appendages.

    Verifiability and Jagged Intelligence

    Karpathy spent significant writing time on verifiability. Classical computers automate what you can specify in code. The current generation of LLMs automates what you can verify. Frontier labs train models inside giant reinforcement learning environments, so the models peak in capability where verification rewards are strong, especially math and code. They stagnate or get rough around the edges elsewhere.

    This explains the jagged intelligence puzzle. The classic example was counting letters in strawberry. The newer one Karpathy offers: a state-of-the-art model will refactor a 100,000 line codebase or find zero-day vulnerabilities, then tell you to walk to a car wash 50 meters away because it is so close. The two coexisting capabilities should be jarring. They reveal that you must stay in the loop, treat models as tools, and understand which RL circuits your task lands in.

    He also points out that data distribution choices matter. The jump in chess capability from GPT-3.5 to GPT-4 came largely because someone at OpenAI added a huge amount of chess data to pre-training. Whatever ends up in the mix gets disproportionately good. You are at the mercy of what labs prioritize, and you have to explore the model the labs hand you because there is no manual.

    Founder Advice in a Lab Dominated World

    Asked what founders should do given that labs are racing toward escape velocity in obvious verifiable domains, Karpathy points back to verifiability itself. If your domain is verifiable but currently neglected, you can build RL environments and run your own fine tuning. The technology works. Pull the lever with diverse RL environments and a fine tuning framework, and you get something useful. He hints there is one specific domain he finds undervalued but declines to name it on stage.

    On the question of what is automatable only from a distance, Karpathy says almost everything can ultimately be made verifiable. Even writing can be assessed by councils of LLM judges. The differences are in difficulty, not in possibility.

    From Vibe Coding to Agentic Engineering

    Vibe coding raises the floor. Anyone can build something. Agentic engineering preserves the professional quality bar that existed before. You are still responsible for your software. You are still not allowed to ship vulnerabilities. The question is how you go faster without sacrificing standards. Karpathy calls it an engineering discipline because coordinating spiky, stochastic agents to maintain quality at speed requires real skill.

    The ceiling on agentic engineering capability is very high. The old idea of a 10x engineer is now an understatement. People who are good at this peak far above 10x.

    What Mediocre Versus AI Native Looks Like

    Karpathy compares this to how different generations use ChatGPT. The difference between a mediocre and an AI native engineer using Claude Code, Codex, or Open Code is investment in setup and full use of available features. The same way previous generations of engineers got the most out of Vim or VSCode, today’s strong engineers tune their agentic environments deeply.

    He thinks hiring processes have not caught up. Most companies still hand out puzzles. The new test should look like asking a candidate to build a full Twitter clone for agents, make it secure, simulate user activity with agents, and then run multiple Codex 5.4x high instances trying to break it. The candidate’s system should hold up.

    What Humans Still Own

    Agents are intern level entities right now. Humans are responsible for aesthetics, judgment, taste, and oversight. Karpathy describes a Menu Gen bug where the agent tried to associate Stripe purchases with Google accounts using email addresses as the key, instead of a persistent user ID. Email addresses can differ between Stripe and Google accounts. This kind of specification level mistake is exactly what humans must catch.
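    Reconstructed in spirit rather than from the actual Menu Gen code, the bug class looks like this:

    ```python
    # The agent joined two systems on email, a mutable, non-unique attribute.
    stripe_purchases = {"pat@work.com": ["pro_plan"]}
    google_accounts = {"pat@gmail.com": {"user_id": "u_123"}}

    # Buggy spec: keying on email silently loses the purchase, because the
    # Stripe billing email and the Google login email need not match.
    print(stripe_purchases.get("pat@gmail.com", []))  # -> []

    # Correct spec: mint a persistent user_id at signup and key both
    # systems on it, so the identity survives any email mismatch.
    users = {"u_123": {"stripe_customer": "cus_9X", "purchases": ["pro_plan"]}}
    print(users["u_123"]["purchases"])  # -> ['pro_plan']
    ```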

    He works with agents to design detailed specs and treats those as documentation. The agent fills in the implementation. He has stopped memorizing API details for things like NumPy axis arguments or PyTorch reshape versus permute. The intern handles recall. Humans handle architecture, design, and the right questions.
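    The reshape versus permute confusion he offloads is a real one. A two line NumPy illustration shows why it bites, with transpose playing the role of PyTorch's permute:

    ```python
    import numpy as np

    x = np.arange(6).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

    # reshape reinterprets the same row-major buffer with new dimensions...
    print(x.reshape(3, 2))  # [[0, 1], [2, 3], [4, 5]]

    # ...while transpose actually swaps the axes, changing the data layout.
    print(x.T)              # [[0, 3], [1, 4], [2, 5]]
    ```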

    Reading the actual code agents produce can still cause heart attacks. It is bloated, full of copy paste, riddled with awkward and brittle abstractions. His Micro GPT project, an attempt to simplify LLM training to its bare essence, was nearly impossible to drive through agents. The models hate simplification. That capability sits outside their RL circuits. Nothing is fundamentally preventing this from improving. The labs simply have not invested.

    Animals Versus Ghosts

    Karpathy returns to his framing that we are not building animals, we are summoning ghosts. Animal intelligence comes from evolution and is shaped by intrinsic motivation, fun, curiosity, and empowerment. LLMs are statistical simulation circuits where pre training is the substrate and RL is bolted on as appendages. They are jagged. They do not respond to being yelled at. They have no real curiosity. The ghost framing is partly philosophical, but it changes how you approach them. You stay suspicious. You explore. You do not assume the system you used yesterday will behave the same on a new task.

    Agent Native Infrastructure

Most software, frameworks, libraries, and documentation are still written for humans. Karpathy’s pet peeve is being told to do something instead of being given a block of text to copy paste to his agent. He wants agent first infrastructure. The Menu Gen project’s hardest part was not writing code. It was deploying on Vercel, configuring DNS, navigating service settings, and stringing together integrations. He wants to give a single prompt and have the entire thing deployed without touching anything.

    Long term he expects agent representation for individuals and organizations. His agent will negotiate meeting details with your agent. The world becomes one of sensors, actuators, and agent native data structures legible to LLMs.

    Education and What Still Matters

    The most striking line of the conversation comes near the end. Karpathy quotes a tweet that shaped his thinking: you can outsource your thinking but you cannot outsource your understanding. Information still has to make it into your brain. You still need to know what you are building and why. You cannot direct agents well if you do not understand the system.

    This is part of why he is so excited about LLM driven knowledge bases. Every time he reads an article, his personal wiki absorbs it, and he can query it from new angles. Every projection onto the same information yields new insight. Tools that enhance human understanding are uniquely valuable because LLMs do not excel at understanding. That bottleneck is yours to manage.
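    A toy sketch of the re-projection idea, with bag of words retrieval standing in for real embeddings and the LLM synthesis step left out (a real build would swap both in; none of this describes Karpathy's actual setup):

    ```python
    import math
    from collections import Counter

    class PersonalWiki:
        """Store notes, retrieve the most relevant ones for a question."""

        def __init__(self):
            self.notes: list[str] = []

        def absorb(self, text: str):
            self.notes.append(text)

        def query(self, question: str, k: int = 3) -> list[str]:
            q = Counter(question.lower().split())

            def cosine(doc: str) -> float:
                d = Counter(doc.lower().split())
                dot = sum(q[w] * d[w] for w in q)
                norm = (math.sqrt(sum(v * v for v in q.values()))
                        * math.sqrt(sum(v * v for v in d.values())))
                return dot / norm if norm else 0.0

            # The top-k notes would be handed to an LLM alongside the
            # question, projecting the same information from a new angle.
            return sorted(self.notes, key=cosine, reverse=True)[:k]

    wiki = PersonalWiki()
    wiki.absorb("HBM supply from Korea caps every frontier lab through 2028.")
    wiki.absorb("Verifiable domains are where RL fine tuning pays off.")
    print(wiki.query("what caps frontier lab supply")[0])
    ```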

    Thoughts

    The most useful frame in this talk is the distinction between vibe coding and agentic engineering. It clarifies what has been muddled for the past year. Vibe coding is about access. Anyone can produce something. Agentic engineering is about discipline. You preserve the standards that made software trustworthy in the first place, while moving at speeds that would have seemed absurd two years ago. These are not the same activity, and conflating them is part of why so many shipped products feel half built.

The Menu Gen anecdote is the kind of story that should make every solo developer pause. If a single Gemini plus Nano Banana prompt can replace a multi service Vercel deployed app, the question for any builder becomes how much of what you are working on right now is going to be made superfluous by the next model release. The honest answer is probably more than you want to admit. The defensive posture is not building thicker apps. It is choosing problems where the model alone is not enough, where taste, distribution, infrastructure, or specific verifiable RL environments give you something the next model cannot collapse into a prompt.

    The verifiability lens is also unusually practical. If you are a solo builder, the question shifts from what is possible to what is verifiable but neglected. The labs will eat the obvious verifiable domains because that is how their RL pipelines are set up. The opportunity is in domains where verification is possible but the labs have not yet invested. That is a much more concrete strategic filter than vague intuitions about defensibility.

The car wash example is going to stick. State of the art models can refactor enormous codebases and still tell you to walk somewhere a sane person would drive. That is the lived reality of jagged intelligence, and it argues strongly for staying in the loop on real decisions rather than handing off everything to agents. The agents are excellent at filling in blanks. They are not yet trustworthy authors of the spec.

    Finally, the line about outsourcing thinking but not understanding is worth taping above the desk. The bottleneck is no longer typing speed, syntax recall, or even API knowledge. It is whether the human in the loop actually understands the system being built. Tools that genuinely improve human understanding, including personal knowledge bases that re project information through different prompts, are likely the most undervalued category of products being built right now. The opportunity is not just in agents. It is in the cognitive scaffolding that makes humans good directors of agents.

  • Paul Tudor Jones on Macro Trading, Bitcoin, the AI Existential Threat, and Why the US Stock Market Is the Most Leveraged in History

    Legendary macro trader Paul Tudor Jones sat down with Patrick O’Shaughnessy on Invest Like the Best for a sweeping conversation that spans 50 years of trading, the 1980 silver collapse, the 1987 crash, his evolving admiration for Warren Buffett, his alarming view of AI safety, and a daily routine that starts at 2:30 AM. This is one of the most candid and useful conversations a working trader, investor, or builder can listen to right now.

    TLDW (Too Long, Didn’t Watch)

    Paul Tudor Jones believes the United States is sitting on the most leveraged equity market in history at 252% of GDP, dwarfing 1929 and 2000. He sees a sovereign debt bubble, a coming wave of IPO supply that could reverse a decade of buyback driven gains, and a dollar yen trade setting up as the next big macro opportunity. He calls Bitcoin the best inflation hedge that exists thanks to its finite supply, but flags real cyber and quantum tail risks. He apologizes publicly to Warren Buffett for years of doubting him and calls him the OG of compound interest. He thinks AI is being deployed without any meaningful safety regulation, that watermarking AI content should be mandated by law, and that humanity is sleepwalking into a tail risk that could cost hundreds of millions of lives. And he closes with a simple life formula: God, family, friends, fun, and service, with a daily intentional act of kindness as the secret to a meaningful life.

    Key Takeaways

• The US equity market is at 252% of GDP, the highest in history. For context, 1929 peaked at 65%, 1987 around 85 to 90%, and 2000 around 170%. A standard mean reversion to long term PEs would be a 30 to 35% decline, which on this base would shave 80 to 90 points of GDP off market cap.
    • We are in a sovereign debt bubble, not necessarily an equity bubble. But the country is over equitized, individual equity weightings are at all time highs, and private equity has more than doubled as a share of institutional portfolios since 2008.
    • IPO supply is about to flip the buyback math. Buybacks have been retiring roughly 2% of market cap per year for a decade. Contemplated IPOs in the next year could equal 5 to 6% of market cap, reversing a structural tailwind.
• Hyperscaler capex will eat into tech cash flow, which is part of why tech has been lagging and may continue to.
    • The buy and hold S&P 500 advice is dangerous at current valuations. Historically, buying the S&P at a PE of 22 has produced negative 10 year returns. Valuation matters even on long horizons.
    • Dollar yen is his current setup. The yen has been grossly undervalued for 24 months. Japan is the largest net international investment creditor, holding roughly $4.5 trillion mostly unhedged in dollars. The catalyst is a new Reagan or Thatcher style prime minister who Paul thinks will trigger a sharp yen rally.
    • Bitcoin is the best inflation hedge in existence because it is finite and decentralized, more scarce than gold. The two real risks are kinetic conflict triggering cyber warfare and the eventual arrival of quantum computing.
    • Every major crash he has lived through had the same DNA: leverage, usually derivative driven. 1987 was 100% portfolio insurance. 1998 was Long Term Capital and derivatives. 2000 was an IPO supply unlock cascade. Today combines all three risks with sovereign debt fragility on top.
    • Trading is boxing, not chess. Most days you are jabbing and feeling out the market. A few times per cycle there is a real opening. Bitcoin in 2020 was a knockout. Two year rates in 2022 was a knockout. The job is to be ready when the opening appears.
    • Great traders are 70% born, not made. Paul polled his top risk takers and the consensus was nature dominates nurture. The traits: type A, hyper curious, loves competition, loves games, intuitive grasp of probability.
    • Liquidity is everything. His grandfather told him as a kid, “you are only worth what you can write a check for tomorrow.” He watched Bunker Hunt go from richest man on earth to virtually bankrupt in six weeks during the 1980 silver collapse. The lesson stuck.
    • Warren Buffett apology. Paul publicly recants decades of skepticism, calling Buffett a flipping genius who understood compound interest at age nine and the OG of compounding.
    • AI safety is a five alarm fire. Paul attended a small conference with modelers from the four biggest model labs. The consensus answer to how AI safety gets resolved was, paraphrasing, when 50 to 100 million people die in an accident. He thinks this is insane.
    • Mandatory AI watermarking should be a campaign issue. He wants knowing violations made a felony after three offenses. He says deepfakes have already fooled serious people he knows twice this year.
    • The build, break, iterate model is fine for most technology and catastrophic for AI because the break in this case can be civilization scale. The Atomic Energy Commission was created 18 months after the bomb. We are three years into deployed AI with effectively zero regulation.
    • Daily routine for 50 years: wake at 6:15, work an hour, 45 minutes of hard cardio, screens for the open, meetings 10 to 12, lunch meeting, hour before close and hour after to plan the next day, walk with wife at 5, work, dinner, mindless TV, work 9:30 to 10:15, sleep, wake at 2:30 or 3 AM to watch the London open and do analytical work, then back to sleep until 6:15.
    • Information overload is now the bottleneck. He works harder today than 40 years ago because the volume of inputs has exploded. The challenge is preserving what he calls exquisite execution: buying when there is blood on the ground and selling at maximum elation.
    • Eli Tullis was his trading mentor. Tullis traded almost only cotton and was a master of executing at the maximum apogee of fear and greed. The biggest lesson came after a catastrophic loss when Tullis greeted his wife’s friends with a smile and total composure. When the going gets tough, the tough get going.
    • Robin Hood Foundation was born from a wrong call. Paul was convinced 1987 would trigger a depression. It did not. But the conviction launched what became one of the most influential anti poverty organizations in America.
    • Journalism 101 should be required at every college. Newspaper inverted pyramid writing taught him principal component analysis: lead with the most important fact, then the next, then the next. He says it is exactly how he ranks variables in a trade.
    • If you do not use it, you lose it. A Palm Beach doctor told him “you retire, you die” and it changed how he thinks about working into his 90s.
    • The principal components of a great life: God, family, friends, fun, service. Significance does not come from the trades. It comes from the people you loved and the people you served.
    • Kill them with kindness. One intentional act of kindness per day, repeated, rewires you. “I should” becomes “I am.” It is the closing message of the entire conversation.

    Detailed Summary

    The Kindest Thing: A Three Year Old Lost in a Vegetable Market

    Paul opens the conversation by insisting they reverse the usual order of the show and start with Patrick’s signature closing question: what is the kindest thing anyone has ever done for you. His earliest childhood memory is being separated from his mother around age two and a half at an outdoor produce market in Memphis in 1957. An elderly Black man took his hand, walked him up and down the aisles, and reunited him with his mother. When she tried to give him five dollars, a meaningful sum at the time, he refused, saying he knew she would do it for his child. That night Paul began adding the unnamed man to his prayer list. He repeated that prayer roughly four to five thousand times over the next twelve years.

    Decades later, watching Harry Reasoner interview Eugene Lang on 60 Minutes, Paul saw the photo negative of his own story: an older man, this time helping kids of color in Harlem, promising to put them through college if they finished high school. Paul called Lang the next day and was redirected to Bedford Stuyvesant, the highest crime neighborhood in New York at the time. He adopted a class, ran after school programs, hired tutors, dealt with kids being murdered and teen pregnancy, and learned by failing what poverty actually requires to defeat. That work seeded the Robin Hood Foundation in 1987 and one of the first charter schools in New York, the Bedford Stuyvesant Charter School of Excellence, which became the number one ranked elementary school out of 543 in NYC within five years.

    Aim High and Shoot Straight

Paul tells the story of his commencement address at what is now Rhodes College in Memphis. He polled the audience to see who remembered their own commencement speakers. Almost no one did. So he ended his speech by pulling out a bow, nocking an arrow, telling the graduates “whatever you do, aim high and shoot straight,” and shooting an apple off a table. Memorable.

    Trading vs Investing: A 50 Year Career in the Trenches

    Paul started in 1976 when inflation was raging and assets routinely doubled and halved in a single year. He cut his teeth on the floor of the cotton exchange and the COMEX, watching Bunker Hunt accumulate roughly 200 million ounces of silver at an average cost of $3.12 and ride it to roughly $50 an ounce, becoming worth $11 billion at the peak. When the COMEX restricted silver to liquidation only, the price collapsed from $50 to under $10 in eight weeks. Hunt was virtually bankrupt. The searing lesson: never trust permanence in any asset, and always preserve liquidity.

He contrasts his own life with Warren Buffett’s. Paul’s BVI Fund has run for 40 years with a negative 0.12 correlation to the S&P 500, meaning 100% of returns are alpha. He compares trading to playing right guard in the NFL for 50 years, fighting in the trenches every single day, while Buffett’s belief in America gave him a different kind of strength: the ability to ride out a 50% drawdown in 2008 to 2009 without flinching. After listening to the Acquired podcast on Berkshire Hathaway, Paul realized Buffett understood compound interest at age nine and sought out Benjamin Graham at 17. He calls himself an idiot for ever doubting him.

    The AI Existential Risk Argument

    Paul attended a small conference around 18 months ago with roughly 35 to 40 attendees, including one modeler from each of the four largest AI labs. When he asked them point blank how they expected AI safety to get resolved, the consensus answer was, paraphrasing, that meaningful action would only happen after a mass casualty event of 50 to 100 million people. He has been alarmed ever since.

    His core critique is structural. The build, break, iterate cycle has been the engine of human invention since the beginning. The problem is that AI is the first technology where the tail event of a break could be civilizational. He compares the regulatory response unfavorably to the atomic bomb: the Atomic Energy Commission was stood up 18 months after Hiroshima. We are three years into widely deployed AI with no real regulation, no public referendum, and no convening with adversaries like China.

    His specific policy ask is mandatory watermarking of AI generated content, with knowing violations made a felony after three offenses. He says deepfakes have already deceived people he trusts twice this year and that restoring trust in a basic shared reality is foundational to fixing American discourse. He also notes that a meaningful share of senior AI scientists openly envision a future of brain implanted humans with inalienable rights. He thinks most humans, given a vote, would reject that path. His point is that there has been no vote.

    The Nature of Trading: Boxing, Not Chess

    Trading, Paul says, is more like classic boxing than chess. You are jabbing, feeling out the opponent, looking for an opening. Most days you are gathering information and not doing much. A few times per cycle there is a real opening that you can land hard. He cites Bitcoin in 2020 and two year rates in 2022 as recent knockouts.

    The genesis of every big move, he argues, is one of three things: the market got carried away, an imbalance went on too long, or a central bank or government did something they should not have. Right now he thinks dollar yen fits the pattern: the yen has been grossly undervalued for two years, Japan holds about $4.5 trillion in net international investment positions mostly unhedged in dollars, and the catalyst has arrived in a new prime minister he compares to Reagan, Thatcher, or Trump in his second term.

    Bitcoin as the Best Inflation Hedge

    Paul reiterates Bitcoin as superior to gold as an inflation hedge. Gold supply grows roughly two percent a year. Bitcoin’s supply is capped. Decentralization adds defensibility. The honest caveats: any kinetic global conflict will trigger cyber warfare, and electronic assets sit on the front line. Quantum computing, if and when it arrives, could enable hacks of any bank or any digital store of value. He is not predicting either tomorrow but he is unwilling to ignore them.

    Are We in a Bubble? Look at the Numbers

    The headline statistic is jaw dropping. Stock market capitalization to GDP is currently 252%. The 1929 peak was 65%. The 1987 peak was 85 to 90%. The 2000 peak was 170%. We have never been here before.

Bear markets since 1970 have mean reverted on roughly a ten year cadence. A reversion to a normalized PE from current levels would imply a 30 to 35% decline. On a 250% of GDP base, that is 80 to 90 points of GDP in evaporated wealth. Capital gains tax revenue would crater, the deficit would explode, and the bond market would suffer a self reinforcing downward spiral.
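    The arithmetic behind those numbers is worth doing explicitly, since it is just two multiplications:

    ```python
    # Back-of-envelope math for the "80 to 90 points of GDP" claim.
    market_cap_to_gdp = 2.52           # market cap at 252% of GDP
    for decline in (0.30, 0.35):       # the 30 to 35% mean reversion range
        points = market_cap_to_gdp * decline * 100
        print(f"{decline:.0%} decline erases {points:.0f} points of GDP")
    # 30% decline erases 76 points of GDP
    # 35% decline erases 88 points of GDP
    ```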

    Add to this the IPO unlock schedule. Contemplated IPOs over the next year may equal 5 to 6% of market cap. For a decade, buybacks have removed roughly 2% per year. The math is about to flip. Hyperscaler capex commitments will further eat into the cash flow that funded the buybacks. Private equity has gone from 7% of institutional portfolios in 2007 to 16% today. Real estate and infrastructure allocations have grown. The system is dramatically more illiquid and more leveraged than it was in 2008.

    Paul’s specific warning to anyone telling clients to just buy the S&P: at a starting PE of 22, history shows negative 10 year returns. Valuation always matters.

    A Day in the Life of PTJ

    The schedule is monastic. Up at 6:15. Work an hour. 45 minutes of hard cardio. At the screens for the open. Meetings from 10 to 12. Lunch meeting. Afternoon meeting. An hour before the close and an hour after to plan tomorrow and think about what is coming overnight in Tokyo and Hong Kong. Home around 5. An hour walking with his wife. Another hour of work. Dinner. Mindless TV. Work again from 9:30 to 10:15. Sleep. Wake at 2:30 or 3 AM to watch the London open for 30 to 45 minutes and do analytical work in the quiet. Back to sleep. Wake at 6:15. Repeat for 40 years.

    He says he works harder now than ever before because of information overload. The opportunity cost of every distraction is exquisite execution: buying when there is blood on the ground, selling at peak euphoria.

    Eli Tullis and Executing at Maximum Pain

    Paul’s mentor Eli Tullis traded almost exclusively cotton. The defining moment came after Tullis was annihilated when a long awaited drought broke and cotton went limit down over a weekend. Paul watched in disbelief as Tullis welcomed his wife’s friends to a beautiful office for lunch with a smile, charm, and zero visible distress. The lesson, branded into Paul: when the going gets tough, the tough get going.

    Are Traders Born or Made

Paul polled four or five of his best risk takers at a Christmas dinner. The unanimous answer: roughly 70% nature. The traits that recur: type A personality, hyper curiosity, love of competition, obsession with games, intuitive grasp of probability theory. Paul jokes that the games gave him a degree in probability theory without his ever taking a math course. He played chess, backgammon, Monopoly, gin rummy, gambled in college, and has never stopped playing bridge with friends.

    Why Keep Trading?

    Three reasons. First, his Palm Beach doctor told him retirement equals death. If you do not use it, you lose it. Second, his father lived to 100 and Paul wants to remain mentally sharp through his 90s. Third, and most importantly, he wants to make an absolute pot of money so he can give it away. The pursuit of nobility, as he calls it.

    The Workless World

    Paul used to despair about a future where AI does so much that humans no longer need to work. So much human significance comes from work. He has become more optimistic recently, watching how athletes find significance in sport and how he finds significance in bridge games with friends. Humans, he argues, are absurdly adaptable. We may find significance in something as small as a single intentional act of kindness per day.

    Why Journalism 101 Should Be Required

    Paul’s father ran a tiny trade finance legal paper in Memphis. Paul grew up writing for it and taking journalism classes. He argues that newspaper inverted pyramid writing should be mandatory in every college, more important than business school. Conclusion first. First sentence carries the most important fact. Who, what, where, when, why, how. Each subsequent paragraph drops one notch in importance. This is just principal component analysis applied to communication. It is also exactly how Paul ranks variables in a trade. At any given moment, ten things might matter, but only one is the catalytic variable today. The discipline of the inverted pyramid is the discipline of trading.

    The Principal Components of a Great Life

    Asked to apply the same framework to life, Paul answers without hesitation: God, family, friends, fun, service. He says he has actually thought about his own funeral with anticipation, partly because of the songs he has chosen. At the end, he says, no one thinks about the 1987 crash or Bitcoin. They think about who they loved, who loved them, what kind of relationships they had, and what they did to leave a legacy of betterment for others. Legacy, he insists, means deeds, not words.

    Kill Them With Kindness

    The closing message comes from his mother. Wake up some days you will be in a bad mood. Something on TV will make you angry. The temptation today is to demonize the other side. The antidote is intentional. One simple act of kindness per day, transmitted outward, repeated. Reps matter. “I should” becomes “I am.” Over time you become an organically kind person. Your outlook brightens. Multiply that across a country and the country changes.

    Thoughts

    The 252% market cap to GDP figure is the single most important number in the conversation. Most listeners will gloss over it. They should not. The structural argument Paul lays out is internally consistent and uncomfortably specific: an over equitized country, a sovereign debt bubble, an IPO supply wave that flips a decade of buyback math, hyperscaler capex eating cash flow, private equity more than doubled as a portfolio share since 2008, and far less liquidity than 2008 to absorb a shock. None of these are predictions of an imminent crash. They are descriptions of the kindling.

    His Buffett apology is the kind of intellectual honesty that is rare in finance. Two operators with opposite styles can both be right for fifty years. Paul’s negative correlation to the S&P with 100% alpha and Buffett’s belief in America with patient compounding are not rival theories of investing. They are different jobs. Most retail investors are trying to do Buffett’s job with a trader’s emotional reflexes, which is why so few make it.

    The AI section is the part of the interview that should make builders pause. Paul is not an AI doomer in the online sense. He is a 50 year career risk manager applying the standard framework: what is the size of the tail, what is the regulatory containment, who has the kill switch. His answer is that the tail is potentially civilization scale, the containment is effectively zero, and there is no kill switch. The historical precedent he reaches for is not science fiction but the Atomic Energy Commission stood up 18 months after Hiroshima. The contrast with our current trajectory is uncomfortable.

    The watermarking proposal is unusually concrete for a trader and unusually politically tractable for an AI safety policy. It does not require slowing capability research. It does not require international coordination as a precondition. It restores the basic epistemic substrate of public discourse: knowing what is human and what is not. Whether you think AI risks are overblown or underrated, watermarking is a Pareto improvement.

    For builders shipping software in the AI era, the meta lesson is that we are running the build, break, iterate playbook on a system whose break radius is no longer contained by the founders. That is a different kind of responsibility than the one most engineers have ever held. It does not have a clean answer yet. But the question is now visible.

    The kindness frame at the start and end is not throat clearing. It is the actual operating system Paul has run on for 70 years. The four to five thousand prayer reps for an unnamed man who held his hand in a Memphis vegetable market produced a pattern interrupt 25 years later that founded one of the most effective anti poverty organizations in the country. Compound interest applies to acts as much as to dollars. That is the through line of the entire conversation, and it is the thing most listeners will forget by tomorrow morning. They should not.

  • Claude Opus 4.7 Released: Anthropic’s New Coding Powerhouse With xhigh Effort Mode, 3.75MP Vision, and State-of-the-Art Agentic Performance

    TLDR

    Anthropic released Claude Opus 4.7 on April 16, 2026, as a direct upgrade to Opus 4.6. It delivers major gains on the hardest coding tasks, introduces a new xhigh effort level, supports images up to 2,576 pixels on the long edge (roughly 3.75 megapixels), and ships with automatic cybersecurity safeguards. Pricing stays flat at $5 per million input tokens and $25 per million output tokens. Early testers at Cursor, Replit, Vercel, Notion, Devin, Harvey, Databricks, and Warp report double-digit benchmark jumps, stronger instruction following, better long-horizon autonomy, and a more opinionated model that pushes back instead of agreeing reflexively.

    Key Takeaways

    • Direct upgrade from Opus 4.6 at the same price point, available via API as claude-opus-4-7, plus Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
    • New xhigh effort level slots between high and max, giving developers finer control over the reasoning-versus-latency tradeoff.
    • Vision gets a real jump: images up to 2,576 pixels on the long edge, more than 3x prior Claude models. XBOW reported 98.5% visual acuity versus 54.5% for Opus 4.6.
    • Coding benchmarks up across the board: Cursor saw 70% on CursorBench versus 58% for 4.6, Rakuten-SWE-Bench resolved 3x more production tasks, and GitHub measured a 13% lift on their 93-task benchmark.
    • Long-horizon autonomy is a headline theme. Devin says Opus 4.7 works coherently for hours. Genspark highlights loop resistance and the highest quality-per-tool-call ratio they have measured.
    • Instruction following is substantially tighter, which means old prompts written for loose-interpretation models may now behave unexpectedly. Re-tune prompts and harnesses.
    • Better memory across file-system-based workflows, reducing the need for up-front context in multi-session work.
• Tokenizer changed: the same input can now map to 1.0 to 1.35x as many tokens. Opus 4.7 also thinks more at higher effort levels, so output token counts rise too.
    • Cybersecurity safeguards automatically detect and block prohibited or high-risk cyber requests. Legitimate security researchers can apply to the new Cyber Verification Program.
    • Claude Code gets /ultrareview, a dedicated review session that catches bugs and design issues. Pro and Max users get three free ultrareviews. Auto mode is extended to Max users.
    • State-of-the-art on GDPval-AA, a third-party evaluation of economically valuable knowledge work spanning finance, legal, and other domains.
    • Not the most capable overall model. That distinction still goes to Claude Mythos Preview, which also remains the best-aligned model Anthropic has trained.

    Detailed Summary

    What Claude Opus 4.7 Actually Is

    Claude Opus 4.7 is Anthropic’s latest generally available frontier model, positioned as a targeted upgrade to Opus 4.6 rather than a ground-up new generation. The focus is squarely on advanced software engineering, long-running agentic workflows, and higher-fidelity vision. Anthropic describes it as handling complex, long-running tasks with rigor and consistency, paying precise attention to instructions, and devising ways to verify its own outputs before reporting back.

    The positioning matters. Claude Mythos Preview, announced alongside Project Glasswing, remains the most powerful and best-aligned model Anthropic has trained. Opus 4.7 is the first release after Mythos Preview and serves a dual purpose: give developers a concrete upgrade today, and stress-test new cybersecurity safeguards on a less capable model before Anthropic attempts a broader release of Mythos-class systems.

    Coding and Agentic Performance

    The early-access testimonials read like a highlight reel of the agentic coding ecosystem. Cursor saw CursorBench scores jump from 58% on Opus 4.6 to over 70% on Opus 4.7. Rakuten measured 3x more resolved production tasks on Rakuten-SWE-Bench with double-digit gains in code quality and test quality. GitHub measured a 13% lift on a 93-task coding benchmark including four tasks that neither Opus 4.6 nor Sonnet 4.6 could solve. Notion observed a 14% improvement over Opus 4.6 at fewer tokens and a third of the tool errors, calling it the first model to pass their implicit-need tests.

    Devin emphasized sustained autonomy, saying the model works coherently for hours and pushes through hard problems rather than giving up. Warp reported that Opus 4.7 passed Terminal Bench tasks prior Claude models had failed, including a tricky concurrency bug Opus 4.6 could not crack. Vercel highlighted a behavior they had not seen before: the model actually does proofs on systems code before starting work, and is noticeably more honest about its own limits.

    A recurring theme across testimonials is that Opus 4.7 pushes back. Replit’s president said it feels like a better coworker because it challenges technical decisions instead of agreeing by default. Augment Code noted it brings a more opinionated perspective rather than simply agreeing with the user. For anyone building real engineering workflows, that pushback behavior is arguably more valuable than raw benchmark deltas.

    Vision: The Quiet Breakthrough

    The vision upgrade may be the most underappreciated change. Opus 4.7 now accepts images up to 2,576 pixels on the long edge, roughly 3.75 megapixels, which is more than three times the previous Claude limit. This is a model-level change, not an API parameter, so every image sent to Claude is processed at higher fidelity automatically.

    XBOW, which builds autonomous penetration testing agents that rely heavily on computer use, reported the most dramatic single number in the entire announcement: 98.5% on their visual acuity benchmark versus 54.5% for Opus 4.6. They described their single biggest Opus pain point as effectively disappearing, unlocking an entire class of work where they could not previously use Claude. Solve Intelligence reported major improvements in multimodal understanding for life sciences patent workflows, from reading chemical structures to interpreting complex technical diagrams.

    This unlocks computer-use agents reading dense screenshots, data extraction from complex diagrams, and any work requiring pixel-perfect references.
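    Because the higher resolution is a model-level change, taking advantage of it needs no new request shape. A minimal sketch using the Anthropic Python SDK's standard image content block, with the model id taken from the announcement:

    ```python
    import base64
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

    # A dense screenshot that older Claude models would downscale past
    # legibility is now processed at up to 2,576 px on the long edge.
    with open("dashboard.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    message = client.messages.create(
        model="claude-opus-4-7",  # API id from the announcement
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": "image/png",
                            "data": image_b64}},
                {"type": "text",
                 "text": "Extract every metric and value from this screenshot."},
            ],
        }],
    )
    print(message.content[0].text)
    ```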

    The New xhigh Effort Level

    Opus 4.7 introduces an xhigh (extra high) effort level that sits between high and max. This gives developers a new middle gear for the reasoning-versus-latency tradeoff on hard problems. In Claude Code, Anthropic raised the default effort level to xhigh across all plans. For coding and agentic use cases, Anthropic recommends starting with high or xhigh effort rather than defaulting to medium.
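    The post names the xhigh level but not the exact API field, so treat the parameter below as an assumed shape, passed through the SDK's extra_body escape hatch rather than as a documented argument:

    ```python
    import anthropic

    client = anthropic.Anthropic()

    message = client.messages.create(
        model="claude-opus-4-7",
        max_tokens=4096,
        messages=[{"role": "user",
                   "content": "Track down the race condition in this queue."}],
        # "effort" is an assumed field name for the new level; check the
        # migration guide for the real one before shipping.
        extra_body={"effort": "xhigh"},
    )
    print(message.content[0].text)
    ```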

    Alongside effort controls, the Claude Platform is getting task budgets in public beta, letting developers guide Claude’s token spend so it can prioritize work across longer runs. This matters because Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings.

    Token Usage Changes You Need to Plan For

Two token-related changes affect migration. First, Opus 4.7 uses an updated tokenizer that improves how the model processes text, but the tradeoff is that the same input can map to 1.0 to 1.35x as many tokens depending on content type. Second, Opus 4.7 thinks more at higher effort levels, which means more output tokens on hard problems.

    Anthropic’s own internal coding evaluation shows the net effect is favorable when measured against quality delivered per token, but the recommendation is to measure the difference on real traffic rather than assume. Token usage can be controlled via the effort parameter, task budgets, or simply prompting the model to be more concise. Anthropic published a migration guide with tuning advice.
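    Measuring the drift on your own traffic is straightforward with the SDK's token counting endpoint. A sketch, assuming the outgoing model keeps the analogous claude-opus-4-6 id:

    ```python
    import anthropic

    client = anthropic.Anthropic()

    def input_tokens(model: str, messages: list[dict]) -> int:
        """Count input tokens for a prompt without paying for a completion."""
        return client.messages.count_tokens(
            model=model, messages=messages
        ).input_tokens

    # Replay a sample of real production prompts through both tokenizers
    # instead of trusting the 1.0 to 1.35x range blindly.
    sample = [{"role": "user", "content": "Summarize our Q3 incident reports."}]
    old = input_tokens("claude-opus-4-6", sample)  # assumed id for Opus 4.6
    new = input_tokens("claude-opus-4-7", sample)
    print(f"tokenizer drift: {new / old:.2f}x")
    ```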

    Claude Code Updates: /ultrareview and Auto Mode

    Claude Code gets two meaningful additions. The new /ultrareview slash command produces a dedicated review session that reads through changes and flags bugs and design issues that a careful reviewer would catch. Pro and Max users get three free ultrareviews to try it out.

    Auto mode, a permissions option where Claude makes decisions on behalf of the user so longer tasks run with fewer interruptions, has been extended from Pro to Max users. The pitch is that auto mode is safer than skipping all permissions while still enabling long autonomous runs.

    Cybersecurity Safeguards and the Cyber Verification Program

    Opus 4.7 ships with safeguards that automatically detect and block requests indicating prohibited or high-risk cybersecurity uses. During training, Anthropic experimented with efforts to differentially reduce cyber capabilities, meaning Opus 4.7’s cyber ceiling is intentionally lower than Mythos Preview’s.

    For legitimate users, Anthropic launched a Cyber Verification Program for security professionals doing vulnerability research, penetration testing, and red-teaming. Real-world data from these safeguards will inform how Anthropic eventually releases Mythos-class models more broadly.

    Safety and Alignment

    Opus 4.7 shows a similar safety profile to Opus 4.6 overall. Honesty and resistance to prompt injection attacks improved. Some measures slipped modestly, notably a tendency to give overly detailed harm-reduction advice on controlled substances. Anthropic’s alignment assessment concluded the model is largely well-aligned and trustworthy, though not fully ideal. Mythos Preview still holds the crown as the best-aligned model according to Anthropic’s evaluations. The full Claude Opus 4.7 System Card has the complete breakdown.

    Real-World Work Beyond Code

    Opus 4.7 posts a state-of-the-art score on the Finance Agent evaluation and on GDPval-AA, a third-party evaluation of economically valuable knowledge work spanning finance, legal, and other domains. Harvey reported 90.9% on BigLaw Bench at high effort with noticeably smarter handling of ambiguous document editing tasks, including correctly distinguishing assignment provisions from change-of-control provisions. Databricks measured 21% fewer errors than Opus 4.6 on OfficeQA Pro document reasoning. Vercel went as far as calling it the best model in the world for building dashboards and data-rich interfaces.

    Pricing and Availability

    Pricing holds at $5 per million input tokens and $25 per million output tokens. Opus 4.7 is live today across all Claude products, the Claude API as claude-opus-4-7, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.

    Thoughts

    The most interesting thing about this release is not the benchmark deltas, which are strong but expected for a point-release. It is the behavioral shift. When a dozen independent companies describe the same model as opinionated, willing to push back, self-verifying, and honest about its limits, that is a different product category than “next version, slightly better.” That is a model optimized for being a collaborator rather than an autocomplete.

    For solo builders running long agentic sessions, the loop resistance and long-horizon autonomy claims are the ones worth taking seriously. Genspark’s framing is sharp: a model that loops indefinitely on 1 in 18 queries wastes compute and blocks users. If Opus 4.7 genuinely closes that failure mode, the economics of overnight autonomous runs change meaningfully.

    The vision jump is the sleeper feature. 3.75 megapixel support plus the XBOW acuity number suggests computer-use agents are about to get a lot more reliable at reading actual screens. Anyone building browser agents, automated QA, or visual data extraction pipelines should retest their stacks this week.

    The instruction-following tightening is a real gotcha. Prompts written against Opus 4.6’s looser interpretation habits may produce surprising results when the model now takes every word literally. Teams with production prompt libraries should budget time for re-tuning rather than expecting a drop-in swap.

    Finally, the strategic framing around Mythos Preview is worth noting. Anthropic is explicitly using Opus 4.7 as a safeguards testbed for eventually releasing more capable cyber-capable systems. That is an honest acknowledgment that capability and deployment readiness are separate problems, and it sets a template for how frontier releases may work going forward.

  • Jensen Huang on Nvidia’s Supply Chain Moat, TPU Competition, China Export Controls, and Why Nvidia Will Not Become a Cloud (Dwarkesh Podcast Summary)

    TLDW (Too Long, Didn’t Watch)

    Jensen Huang sat down with Dwarkesh Patel for over 90 minutes covering Nvidia’s supply chain dominance, the TPU threat, why Nvidia will not become a hyperscaler, whether the US should sell AI chips to China, and why Nvidia does not pursue multiple chip architectures at once. Jensen framed Nvidia’s entire business as transforming “electrons into tokens” and argued that Nvidia’s real moat is not any single technology but the full stack ecosystem it has built over two decades. He was blunt about his regret over not investing in Anthropic and OpenAI earlier, passionate about keeping the American tech stack dominant worldwide, and dismissive of the idea that China’s chip industry can be meaningfully contained through export controls.

    Key Takeaways

    1. Nvidia’s moat is the ecosystem, not the chip. Jensen repeatedly emphasized that Nvidia’s competitive advantage comes from CUDA, its massive installed base, its deep partnerships across the entire supply chain, and the fact that it operates in every cloud. The moat is not a single product but an interlocking system that took 20+ years to build.

    2. Supply chain bottlenecks are temporary, energy bottlenecks are not. Jensen argued that CoWoS packaging, HBM memory, EUV capacity, and logic fabrication bottlenecks can all be resolved in two to three years with the right demand signal. The real constraint on AI scaling is energy policy, which takes far longer to fix.

    3. TPUs and ASICs are not an existential threat to Nvidia. Jensen was emphatic that no competitor has demonstrated better price-performance or performance-per-watt than Nvidia, and challenged TPU and Trainium to prove otherwise on public benchmarks like InferenceMAX and MLPerf. He described Anthropic as a “unique instance, not a trend” for TPU adoption.

    4. Jensen regrets not investing in Anthropic and OpenAI earlier. He admitted he did not deeply internalize how much capital AI labs needed and that traditional VC funding was not sufficient for companies at that scale. He described this as a clear miss, though he said Nvidia was not in a position to make multi-billion dollar investments at the time.

    5. Nvidia will not become a hyperscaler. Jensen’s philosophy is “do as much as needed, as little as possible.” Building cloud infrastructure is something other companies can do, so Nvidia supports neoclouds like CoreWeave, Nebius, and Nscale instead of competing with them. Nvidia invests in ecosystem partners rather than vertically integrating into cloud services.

    6. Jensen is strongly against US chip export controls on China. This was the longest and most heated segment of the interview. Jensen argued that China already has abundant compute, energy, and AI researchers, and that export controls have accelerated China’s domestic chip industry while causing the US to concede the world’s second-largest technology market. He compared the situation to how US telecom policy allowed Huawei to dominate global telecommunications.

    7. AI will cause software tool usage to skyrocket, not collapse. Jensen pushed back on the narrative that AI will commoditize software companies. He argued that agents will use existing tools at massive scale, causing the number of instances of products like Excel, Synopsys Design Compiler, and other enterprise tools to grow exponentially.

    8. Nvidia does not pick winners among AI labs. Jensen explained that Nvidia invests across multiple foundation model companies simultaneously and refuses to favor any single one. He cited his own company’s unlikely survival story as the reason for this humility: Nvidia’s original graphics architecture was “precisely wrong” and would have been counted out by anyone picking winners.

    9. Nvidia added Groq for premium token economics. Nvidia recently acquired Groq and is folding it into the CUDA ecosystem because the market is now segmenting into different token tiers. Some customers will pay premium prices for faster response times even at lower throughput, creating a new segment of the inference market.

    10. Without AI, Nvidia would still be very large. Jensen was clear that accelerated computing, not AI specifically, is the foundational mission of the company. Molecular dynamics, quantum chemistry, computational lithography, data processing, and physics simulation all benefit from GPU acceleration regardless of deep learning.

    Detailed Summary

    Nvidia’s Real Business: Electrons to Tokens

Jensen opened the conversation by reframing Nvidia’s entire value proposition. When Dwarkesh suggested that Nvidia is fundamentally a software company that sends a GDSII file to TSMC for manufacturing, Jensen pushed back hard. He described Nvidia’s job as transforming electrons into tokens, with everything in between representing an “incredible journey” of artistry, engineering, science, and invention. He said the transformation is far from deeply understood and the journey is far from over, making commoditization unlikely.

    Jensen described Nvidia as operating a philosophy of doing “as much as necessary and as little as possible.” Whatever Nvidia does not need to do itself, it partners with someone else and makes it part of the broader ecosystem. This is why Nvidia has what Jensen called probably the largest ecosystem of partners in the industry, spanning the full supply chain upstream and downstream, application developers, model makers, and all five layers of the AI stack.

    On the question of whether AI will commoditize software companies, Jensen offered a contrarian take. He argued that agents are going to use software tools at unprecedented scale, meaning the number of instances of products like Excel, Cadence design tools, and Synopsys compilers will skyrocket. Today the bottleneck is the number of human engineers. Tomorrow, those engineers will be supported by swarms of agents exploring design spaces and using the same tools humans use today. Jensen said the reason this has not happened yet is simply that the agents are not good enough at using tools. That will change.

    The Supply Chain Moat

    Dwarkesh pressed Jensen on Nvidia’s reported $100 billion (and potentially $250 billion) in purchase commitments with foundries, memory manufacturers, and packaging companies. The question was whether Nvidia’s real moat for the next few years is simply locking up scarce upstream components so that no competitor can get the memory and logic they need to build alternative accelerators.

    Jensen confirmed this is a significant advantage but framed it differently. He said Nvidia has made enormous explicit and implicit commitments upstream. The implicit commitments matter just as much: Jensen personally meets with CEOs across the supply chain to explain the scale of the coming AI industry, convince them to invest in capacity, and assure them that Nvidia’s downstream demand is large enough to justify that investment. Nvidia’s GTC conference serves this purpose too, bringing the entire ecosystem together so upstream suppliers can see downstream demand and vice versa.

    Jensen described a process of systematically “prefetching bottlenecks” years in advance. CoWoS advanced packaging was a major bottleneck two years ago, but Nvidia swarmed it with repeated doubling of capacity until TSMC recognized it as mainstream computing technology rather than a specialty product. More recently, Nvidia has invested in the silicon photonics ecosystem through partnerships with Lumentum and Coherent, invented new packaging technologies, licensed patents to keep the supply chain open, and even invested in new testing equipment like double-sided probing.

    When Dwarkesh asked about the ultimate physical bottlenecks, Jensen surprised him. The hardest bottleneck to solve is not CoWoS or HBM or EUV machines. It is plumbers and electricians needed to build data centers. Jensen used this as a launching point to criticize “doomers” who discourage people from pursuing careers in software engineering or radiology, arguing that scaring people out of these professions creates the real bottlenecks.

    On EUV and logic scaling specifically, Jensen was optimistic. He said no supply chain bottleneck lasts longer than two to three years. Once you can build one of something, you can build ten, and once you can build ten, you can build a million. The key is a clear demand signal. If TSMC is convinced of the demand, ASML will produce enough EUV machines. Meanwhile, Nvidia continues to improve computing efficiency by 10x to 50x per generation through architecture, algorithms, and system design.

    The TPU Question

    Dwarkesh pushed hard on whether Google’s TPUs represent a real threat, noting that two of the top three AI models (Claude and Gemini) were trained on TPUs. Jensen drew a sharp distinction between what Nvidia builds and what a TPU is. Nvidia builds accelerated computing, which serves molecular dynamics, quantum chromodynamics, data processing, fluid dynamics, particle physics, and AI. A TPU is a tensor processing unit optimized for matrix multiplies. Nvidia’s market reach is far greater than any TPU or ASIC can possibly have.

    Jensen emphasized programmability as Nvidia’s core architectural advantage. If you want to invent a new attention mechanism, build a hybrid SSM model, fuse diffusion and autoregressive techniques, or disaggregate computation in a novel way, you need a generally programmable architecture. The only way to achieve 10x or 100x performance leaps (versus the roughly 25% per year from Moore’s Law) is to fundamentally change the algorithm, and that requires the flexibility CUDA provides.
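    To make the programmability point tangible, here is a deliberately invented attention variant (mine, not a published mechanism, and not Nvidia's example) that costs a researcher a dozen lines of PyTorch to try, where a fixed-function matmul ASIC would dictate the algorithm in silicon:

    ```python
    import torch
    import torch.nn.functional as F

    def gated_attention(q, k, v, gate):
        """Toy variant: scores are modulated by a learned gate pre-softmax."""
        scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
        scores = scores * torch.sigmoid(gate)  # the experimental twist
        return F.softmax(scores, dim=-1) @ v

    B, T, D = 2, 16, 64
    q, k, v = (torch.randn(B, T, D) for _ in range(3))
    gate = torch.randn(B, T, T)
    print(gated_attention(q, k, v, gate).shape)  # torch.Size([2, 16, 64])
    ```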

    On the specific question of whether hyperscalers with huge engineering teams can simply write their own kernels and bypass CUDA, Jensen acknowledged they do write custom kernels but argued that Nvidia’s engineers still routinely deliver 2x to 3x speedups when they optimize a partner’s stack. He described Nvidia’s GPUs as “F1 racers” that anyone can drive at 100 mph, but extracting peak performance requires deep architectural expertise. Nvidia uses AI itself to generate many of its optimized kernels.

    Jensen was particularly blunt about public benchmarks. He pointed to Dylan Patel’s InferenceMAX benchmark and said neither TPU nor Trainium has been willing to demonstrate their claimed performance advantages on it. He said Nvidia’s performance-per-TCO is the best in the world, “bar none,” and challenged anyone to prove otherwise.

    Regarding Anthropic’s multi-gigawatt deal with Broadcom and Google for TPUs, Jensen called it “a unique instance, not a trend.” He said without Anthropic, there would be essentially no TPU growth and no Trainium growth. He traced this back to his own mistake: when Anthropic and OpenAI needed multi-billion dollar investments from their compute suppliers to get off the ground, Nvidia was not in a position to provide that capital. Google and AWS were, and in return, Anthropic committed to using their compute.

    Nvidia’s Investment Strategy and Regrets

    Jensen was unusually candid about his regret over not investing in foundation model companies earlier. He said he did not deeply internalize how different AI labs were from typical startups. A traditional VC would never put $5 to $10 billion into a single AI lab, but that was exactly what companies like OpenAI and Anthropic needed. By the time Jensen understood this, Nvidia was not in a financial or cultural position to make those kinds of investments.

    Now, Nvidia has invested approximately $30 billion in OpenAI and $10 billion in Anthropic. Jensen said he is delighted to support both and considers their existence essential for the world. But he acknowledged that these investments came at much higher valuations than would have been possible years earlier.

Jensen explained Nvidia’s broader investment philosophy: support everyone, do not pick winners. If he invests in one foundation model company, he invests in all of them. This comes from hard-won humility. When Nvidia started, there were 60 3D graphics companies. Nvidia’s original architecture was “precisely wrong” and the company would have been at the top of most lists to fail. Jensen said he has enough humility from that experience to know that you cannot predict which AI company will ultimately succeed.

    Why Nvidia Will Not Become a Hyperscaler

    Dwarkesh pointed out that Nvidia has the cash to build and operate its own cloud infrastructure, bypassing the middleman ecosystem that converts CapEx into OpEx for AI labs. Jensen rejected this path based on his core operating philosophy.

    If Nvidia did not build its computing platform, NVLink, and the CUDA ecosystem, nobody else would have done it. He is “completely certain” of that. These are things Nvidia must do. But the world has lots of clouds. If Nvidia did not build a cloud, someone else would show up. So the answer is to support the ecosystem instead: invest in CoreWeave, Nscale, Nebius, and others to help them exist and scale, rather than competing with them.

    Jensen was clear that Nvidia is not trying to be in the financing business either. When OpenAI needed a $30 billion investment before its IPO, Nvidia stepped up because OpenAI needed it and Nvidia deeply believed in the company. But these are targeted ecosystem investments, not a strategic pivot into cloud services.

    On GPU allocation during shortages, Jensen pushed back on the narrative that Nvidia strategically “fractures” the market by giving allocations to smaller neoclouds. He said the process is straightforward: you forecast demand, you place a purchase order, and it is first in, first out. Nvidia never changes prices based on demand. Jensen said he prefers to be dependable and serve as the foundation of the industry rather than extracting maximum short-term value.

    The China Debate

    The longest and most heated section of the interview was Jensen’s case against US chip export controls on China. This was a genuine debate, with Dwarkesh pushing the national security argument and Jensen pushing back forcefully.

    Jensen’s core argument rested on several pillars. First, China already has abundant compute. They manufacture 60% or more of the world’s mainstream chips, have massive energy infrastructure (including empty data centers with full power), and employ roughly 50% of the world’s AI researchers. The threshold of compute needed to build models like Anthropic’s Mythos has already been reached and exceeded by China’s existing infrastructure.

    Second, export controls have backfired. They accelerated China’s domestic chip industry, forced their AI ecosystem to optimize for internal architectures instead of the American tech stack, and caused the United States to concede the second-largest technology market in the world. Jensen compared this directly to how US telecom policy allowed Huawei to dominate global telecommunications infrastructure.

    Third, Jensen argued that AI is a five-layer stack (energy, chips, computing platform, models, applications) and the US needs to win at every layer. Fixating on one layer (models) at the expense of another layer (chips) is counterproductive. If Chinese open source AI models end up optimized for non-American hardware and that stack gets exported to the global south, the Middle East, Africa, and Southeast Asia, the US will have lost something far more valuable than whatever marginal compute advantage the export controls provided.

    Dwarkesh countered with the Mythos example: Anthropic’s new model found thousands of high-severity zero-day vulnerabilities across every major operating system and browser, including one that had existed in OpenBSD for 27 years. If China had enough compute to train and deploy a model like Mythos at scale before the US could prepare, the cyber-offensive capabilities would be devastating.

    Jensen’s response was direct. Mythos was trained on “fairly mundane capacity” that is already abundantly available in China. The amount of compute is not the bottleneck for that kind of breakthrough. Great computer science is, and China has no shortage of brilliant AI researchers. He pointed to DeepSeek as evidence: most advances in AI come from algorithmic innovation, not raw hardware. If China’s researchers can achieve breakthroughs like DeepSeek with limited hardware, imagine what they could do with more.

    Jensen also argued for dialogue over confrontation. He said it is essential that American and Chinese AI researchers are talking to each other, and that both countries agree on what AI should not be used for. The idea that you can prevent AI risks by cutting off chip sales, when the real advances come from algorithms and computer science, reflects a fundamental misunderstanding of how AI progress works.

    The debate ended without resolution, but Jensen’s final point was sharp: “I’m not talking to somebody who woke up a loser. That loser attitude, that loser premise, makes no sense to me.”

    Why Not Multiple Chip Architectures?

    Near the end of the interview, Dwarkesh asked why Nvidia does not run multiple parallel chip projects with different architectures, like a Cerebras-style wafer-scale design or a Dojo-style huge package, or even one without CUDA.

    Jensen’s answer was simple: “We don’t have a better idea.” Nvidia simulates all of these alternative approaches in its internal simulators and they are provably worse. The company works on exactly the projects it wants to work on. If the workload were to change dramatically (not just the algorithms, but the actual market shape), Nvidia might add other accelerators.

    In fact, Nvidia recently did exactly this by acquiring Groq. The inference market is now segmenting into different tiers. Some customers will pay premium prices for extremely fast response times even if throughput is lower. This creates a new “high ASP token” segment that justifies a different point on the performance curve. But Jensen was clear: if he had more money, he would put it all behind Nvidia’s existing architecture, not diversify into alternatives.

    Nvidia Without AI

    Jensen closed by saying that even if the deep learning revolution had never happened, Nvidia would be “very, very large.” The premise of the company has always been that general-purpose computing cannot scale indefinitely and that domain-specific acceleration is the way forward. Molecular dynamics, seismic processing, image processing, computational lithography, quantum chemistry, and data processing all benefit from GPU acceleration regardless of AI. Jensen said the fundamental promise of accelerated computing has not changed, “not even a little bit.”

    Thoughts

    This interview is one of the most revealing Jensen Huang conversations in years, partly because Dwarkesh actually pushes back instead of lobbing softballs. A few things stand out.

    The Anthropic regret is real and significant. Jensen is essentially admitting that Nvidia’s biggest strategic miss of the AI era was not understanding that foundation model companies needed supplier-level capital commitments, not VC funding. The fact that Google and AWS used compute investments to lock in Anthropic’s architecture choices has had downstream consequences that Nvidia is still working to unwind. When Jensen says Anthropic is “a unique instance, not a trend” for TPU adoption, he is simultaneously downplaying the threat and revealing exactly how seriously he takes it.

    The China debate is the highlight. Jensen’s argument is more nuanced than it first appears. He is not saying “sell China everything.” He is saying the current binary approach of near-total restriction has backfired by accelerating China’s domestic chip industry and pushing the Chinese AI ecosystem away from the American tech stack. His comparison to the US telecom industry losing global market share to Huawei is pointed and historically grounded. Whether you agree with his conclusion or not, the framing of AI as a five-layer stack where the US needs to compete at every layer is a useful mental model.

    The “electrons to tokens” framing is Jensen at his best. It is a simple metaphor that captures something genuinely complex about where value is created in the AI supply chain. And his insistence that the transformation is “far from deeply understood” is a subtle way of arguing that Nvidia’s competitive position will be durable because the problem space is not close to being solved.

    The Groq acquisition reveal is interesting for what it signals about the inference market. If Nvidia is creating a separate product tier for premium-priced, low-latency tokens, it suggests the company sees inference economics fragmenting significantly. This aligns with the broader trend of AI becoming an enterprise product where different customers have wildly different willingness to pay based on how they use tokens.

    Finally, Jensen’s refusal to diversify chip architectures is a bold bet. “We simulate it all in our simulator, provably worse” is an incredibly confident statement. History is full of companies that were right until they were not. But Nvidia’s track record of 50x generation-over-generation improvements through co-design across processors, fabric, libraries, and algorithms is hard to argue with. The question is whether the current paradigm of transformer-based models on GPU clusters represents a local or global optimum for AI compute.

  • Anthropic’s Growth Strategy Explained: $1B to $19B in 14 Months, Automating Experiments With Claude, and Why Old Playbooks Are Dead (Lenny’s Podcast Recap)

    TLDW (Too Long, Didn’t Watch)

    Amol Avasare, Head of Growth at Anthropic, sat down with Lenny Rachitsky to explain how Anthropic grew from $1 billion to over $19 billion in annual recurring revenue in just 14 months. He breaks down their internal tool called CASH (Claude Accelerates Sustainable Hypergrowth) that automates growth experimentation, why 50 to 70 percent of traditional growth playbooks are now obsolete, why the PM-to-engineer ratio may need to flip, and how Anthropic’s early bet on AI coding created a research flywheel that competitors are only now starting to copy. He also shares how he cold emailed his way into the job, why activation is the single hardest problem in AI products, and how he uses Cowork to detect team misalignment across Slack channels automatically.

    Key Takeaways

    1. Anthropic’s growth trajectory is historically unprecedented. Revenue went from $1 billion at the start of 2025 to over $19 billion ARR by February 2026. That 19x growth in 14 months dwarfs companies like Atlassian, Snowflake, and Palantir, which took 15 to 20 years to reach $4.5 to $6 billion ARR. The number Amol quoted was already outdated by the time the episode aired.

    2. Anthropic is automating growth experimentation with an internal tool called CASH. CASH stands for Claude Accelerates Sustainable Hypergrowth. The growth platform team uses Claude to identify opportunities, build experiments (mostly copy changes and minor UI tweaks so far), test them against quality and brand standards, and analyze results. Amol describes the current win rate as roughly equivalent to a junior PM with two to three years of experience, but notes it was not possible at all before Opus 4.5 and has improved significantly with Opus 4.6. Human review is still in the loop but decreasing week over week.

    3. Activation is the single highest-leverage growth problem in AI. The core challenge is capability overhang: models are improving so fast that users do not know what they can do. By the time you have tested and optimized onboarding for one model’s capabilities, the next model has already shipped with entirely new features that make your learnings obsolete. Anthropic addresses this by adding intentional friction in onboarding to understand who users are and funnel them to the right products and features.

    4. Anthropic indexes 70/30 toward big bets, the opposite of most growth teams. Traditional growth teams spend 60 to 70 percent of effort on small to medium optimizations. Anthropic flips that ratio because they believe the product value delivered two years from now will be 100x to 1,000x what it is today. In that exponential environment, micro-optimizations capture a negligible percentage of future value. Large strategic bets are where the leverage lives.

    5. The PM-to-engineer ratio may need to flip. Engineers are getting 2 to 3x more productive with tools like Claude Code, effectively turning a team of 5 engineers into the equivalent of 15 to 20. But PMs and designers have not seen the same multiplier. The result is that product management and design are “absolutely squeezed.” Anthropic is responding by hiring more PMs and deputizing product-minded engineers to act as mini-PMs on projects under two weeks. The counterintuitive insight: companies may need more PMs, not fewer, as AI accelerates engineering output.

    6. Cold emailing still works if you do it right. Amol got his job by cold emailing Mike Krieger, Anthropic’s Chief Product Officer (and co-founder of Instagram), at a time when no growth role was even listed. Key tactics: use a high-converting subject line you have tested over time, find personal email addresses instead of competing in crowded LinkedIn inboxes, keep the message extremely short, and follow up relentlessly until someone explicitly asks you to stop.

    7. PRDs are largely obsolete at Anthropic. Amol estimates that 60 to 80 percent of what his team ships does not have a formal PRD. For small projects, coordination happens entirely in Slack. For larger initiatives, he will sometimes throw his thoughts into Cowork five minutes before a kickoff meeting to generate a rough document. His default philosophy: if you can skip the doc and jump straight to prototyping or action, do it.

    8. The AI coding bet created a research flywheel. Anthropic’s deep focus on coding was not just a commercial play. A document written by co-founder Ben Mann in 2021, just months after the company was founded, laid out the case for focusing on AI coding because better coding models would accelerate their own researchers, which would produce better models, which would produce better coding tools, in a compounding loop. This is something competitors are only now starting to recognize and copy.

    9. Cowork is being used to detect organizational misalignment. Amol runs a weekly scheduled task in Cowork that uses the Slack MCP to scan conversations across the company and surface areas of potential misalignment. He describes cases where this caught teams about to do overlapping work or spin their wheels on conflicting priorities. He also uses Cowork to simulate coaching sessions with his manager, Ami Vora, by asking Claude to analyze her public writing and internal Slack activity and then deliver feedback from her perspective.

    10. Anthropic’s culture is its most defensible moat. Amol describes a culture where every single person is fully engaged, nobody is checked out, and there is radical transparency through “notebook channels” on Slack where anyone, including leadership, shares their thinking publicly. Employees openly challenge Dario Amodei in these channels after all-hands meetings. These notebook channels also serve a practical purpose: they become training data that helps Claude understand how different teams think and operate.

    Detailed Summary

    The Cold Email That Started It All

    Amol Avasare was not recruited through a job listing, a referral, or a sourcing pipeline. He cold emailed Mike Krieger, Anthropic’s CPO and the co-founder of Instagram, with a short pitch: he loved the product, thought Anthropic badly needed a growth team, and wanted to talk. At the time, Anthropic had no growth roles posted. They were just beginning to think about it internally, and the timing was perfect.

    Amol’s approach to cold email is methodical. He has a subject line formula he has refined over years of founder outreach that produces abnormally high open rates (he declined to share the exact copy). He targets personal email addresses rather than work inboxes or LinkedIn, where competition for attention is fierce. The message itself is brutally short: who he is, why he would be a fit, and a request to chat. His follow-up philosophy is to keep reaching out until someone tells him to stop. Krieger responded on the first attempt.

    What $1B to $19B in 14 Months Actually Feels Like

    From the inside, Anthropic’s growth does not feel like a victory lap. Amol describes it as the hardest job he has ever had, harder than being a founder and harder than investment banking. About 70 percent of his time goes to what the team calls “success disasters,” which are problems created by things going extremely well. All the charts are green and up and to the right, but the underlying infrastructure, processes, and systems are constantly breaking under the strain of hypergrowth.

    The revenue trajectory tells the story: $0 to $100 million in 2023, $100 million to $1 billion in 2024, $1 billion to roughly $10 billion in 2025, and already $19 billion ARR by the end of February 2026. Amol notes that at the end of 2024, Dario Amodei was pushing for growth targets that the team thought were impossible. Those targets were hit and exceeded. The internal culture has adapted accordingly. Linear charts are considered uncool. Everything is presented on a log-linear scale.
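
    As a quick illustration of why the log-linear view is the only readable one, here is a minimal matplotlib sketch plotting the milestones quoted above on a log y-axis (the dollar figures come from the episode; the code itself is just an illustration):

    ```python
    import matplotlib.pyplot as plt

    # ARR milestones as quoted in the episode (approximate, USD billions).
    # 2026.2 stands in for the end of February 2026.
    years = [2023, 2024, 2025, 2026.2]
    arr = [0.1, 1, 10, 19]

    fig, ax = plt.subplots()
    ax.plot(years, arr, marker="o")
    ax.set_yscale("log")  # exponential growth renders as a near-straight line
    ax.set_xlabel("Year")
    ax.set_ylabel("ARR (USD billions, log scale)")
    ax.set_title("Anthropic ARR milestones, log-linear")
    plt.show()
    ```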

    Why Activation Is the Hardest Problem in AI

    The central growth challenge for AI products is not acquisition. It is activation: getting users to understand what the product can actually do for them. Amol frames this as a capability overhang problem. Models are improving so rapidly that even internal teams struggle to keep up with what is newly possible. If Anthropic employees have to carve out dedicated time to explore a new model’s capabilities, the average user is even further behind.

    The danger is that someone signs up for Claude, asks it about the weather, and walks away thinking that is all it does. The product development cycle for onboarding is also under strain: by the time you have run tests, gathered learnings, and shipped an optimized activation flow for one model generation, the next model has shipped with capabilities that make your work irrelevant.

    Anthropic’s approach borrows from Amol’s experience at Mercury and MasterClass. They add deliberate friction to the signup flow, asking users questions about who they are and what they want to accomplish. This allows them to route users to the right products and features. The data also feeds downstream into lifecycle marketing and ad targeting. Amol has seen this pattern work consistently across every company he has worked at: the right friction, applied at the right time, outperforms frictionless flows that dump users into a blank canvas with no guidance.

    The CASH System: Automating Growth Experimentation

    Anthropic’s growth platform team, led by Alexey Komissarouk (who teaches growth engineering at Reforge), has built an internal system called CASH. The name stands for Claude Accelerates Sustainable Hypergrowth.

    CASH operates on a four-stage loop. First, Claude identifies growth opportunities by analyzing trends, metrics, and past experiment results. Second, Claude builds the actual feature or change. Third, Claude tests the output against quality and brand standards. Fourth, Claude analyzes the results and gathers learnings after the experiment ships.
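
    Anthropic has not shared CASH’s internals, but the four-stage loop described in the episode maps naturally onto a simple orchestration pattern. A minimal sketch, in which every name is invented and the expensive pieces (the model call, the A/B infrastructure) are stubs:

    ```python
    # Hypothetical sketch of a CASH-style loop; none of these names come
    # from Anthropic's actual system.

    def claude(prompt: str) -> str:
        """Stub for a model call (e.g., via the Anthropic Messages API)."""
        raise NotImplementedError

    def run_experiment(change: str) -> dict:
        """Stub for shipping a change behind an A/B test and collecting results."""
        raise NotImplementedError

    def cash_iteration(metrics: dict, past_results: list) -> dict:
        # 1. Identify: mine metrics and prior experiments for an opportunity.
        idea = claude(f"Given metrics {metrics} and past results {past_results}, "
                      "propose one high-leverage copy or small UI experiment.")
        # 2. Build: generate the concrete change.
        change = claude(f"Implement this experiment as a concrete change: {idea}")
        # 3. Test: gate on quality and brand standards.
        verdict = claude(f"Answer PASS or FAIL on quality and brand standards: {change}")
        if "FAIL" in verdict:
            return {"idea": idea, "shipped": False}
        # Human approval still sits between step 3 and shipping, per the episode.
        # 4. Analyze: run the experiment and feed learnings back into the loop.
        results = run_experiment(change)
        learnings = claude(f"Summarize learnings for future experiments: {results}")
        return {"idea": idea, "shipped": True, "learnings": learnings}
    ```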

    Currently, CASH handles mostly copy changes and minor UI tweaks. The win rate is comparable to a junior PM with two to three years of experience. A senior PM would still do better. But the trajectory matters: this was not possible at all before Opus 4.5 launched, and results have improved meaningfully with Opus 4.6. Human approval is still required before shipping, but the amount of human time spent reviewing is decreasing week over week.

    The part that Claude still cannot handle well is cross-functional stakeholder management. Getting six people in a room to align on a decision remains a fundamentally human problem. As Amol’s head of design joked: “We will have AGI and it will still be impossible to get six people in a room to get aligned.”

    Why the PM-to-Engineer Ratio Might Flip

    This is one of the most counterintuitive insights from the conversation. The conventional assumption is that AI will reduce the need for PMs. Amol argues the opposite: companies may need more PMs, at least in the near term.

    The math is straightforward. Tools like Claude Code are making engineers 2 to 3x more productive. A team of 5 engineers now produces the output equivalent of 15 to 20 engineers in the pre-AI era. PMs and designers have seen productivity gains, but not at the same multiplier. The result is a bottleneck: one PM managing the equivalent output of 15 to 20 engineers worth of work, while also handling cross-functional coordination, stakeholder alignment, and strategic direction.

    Anthropic’s response is twofold. First, they are actively hiring more PMs. Second, they have formalized a system where product-minded engineers act as mini-PMs on any project that is two engineering weeks or less. The engineer handles everything: talking to legal, talking to security, managing stakeholders. The PM only steps in if things go badly off track.

    For larger projects, the PM remains squarely accountable. But the key insight is about leverage: if you are one PM managing 20 engineers, the highest-value use of your time is not shipping the 21st feature yourself. It is getting 5 percent better at guiding the team on what the right opportunities are and upleveling every engineer’s product thinking.

    The Coding Flywheel That Changed Everything

    Anthropic’s deep bet on coding was not obvious at the time. A document from co-founder Ben Mann, dated 2021, laid out the strategic logic just months after the company was founded. The argument was that investing heavily in AI coding would create a compounding flywheel: better coding models would help Anthropic’s own researchers write code more effectively, which would accelerate model development, which would produce even better coding tools.

    This early focus gave Anthropic a structural advantage that competitors are only now trying to replicate. It also explains why the company went so deep on B2B and enterprise use cases rather than chasing consumer attention. The commercial opportunity of coding was large on its own, but the internal research acceleration made it doubly strategic.

    Amol notes that this focus was partly born from constraint. Anthropic was the smallest, least well-funded player in the space for years. They did not have Meta’s distribution or Google’s cash flow or OpenAI’s first-mover advantage. That constraint forced extreme focus, which is a principle Amol applies broadly. He calls it “freedom through constraints.”

    How Amol Uses AI to Manage His Day

    Amol’s personal AI usage is extensive and worth documenting for anyone looking to see how a power user at the frontier actually operates.

    Every morning, a scheduled Cowork task reviews 20 to 25 charts across Anthropic’s products and sends him a summary of what needs attention. The false positive and false negative rates are improving week over week, giving him increasing confidence in delegating this monitoring.

    He uses Cowork to handle administrative tasks he hates: booking meeting rooms, first-pass email triage, filing expense reports in Brex and reimbursements in Benpass. None of this requires his attention anymore.

    For management, he runs weekly Cowork tasks that review what his direct reports have done, cross-reference their work against team OKRs and meeting transcripts, and surface feedback he should give them. He also runs a parallel task for himself, asking Claude to impersonate his manager Ami Vora based on her public writing and internal Slack activity, and deliver feedback from her perspective.

    Perhaps most powerfully, he runs a weekly misalignment detection task that scans Slack conversations across the company and surfaces areas where teams may be working at cross purposes. He describes cases where this caught potentially expensive coordination failures before they compounded.
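
    The episode does not show how these scheduled tasks are written, but the pattern is reproducible with the public Anthropic Python SDK plus any source of Slack messages. A minimal sketch, where the model name, the channel digest format, and the prompt are all assumptions, and fetching the Slack data (via an MCP server or an export) is out of scope:

    ```python
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def detect_misalignment(messages_by_channel: dict[str, list[str]]) -> str:
        """Ask Claude to flag teams that appear to be working at cross purposes."""
        digest = "\n\n".join(
            f"#{channel}:\n" + "\n".join(msgs[-50:])  # last 50 messages per channel
            for channel, msgs in messages_by_channel.items()
        )
        response = client.messages.create(
            model="claude-sonnet-4-5",  # assumption; use whatever model you run
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": "Scan these Slack channels and list any teams doing "
                           "overlapping work or pursuing conflicting priorities. "
                           f"Cite the specific messages.\n\n{digest}",
            }],
        )
        return response.content[0].text
    ```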

    Notebook Channels and the Culture Moat

    Anthropic uses “notebook channels” on Slack, which function like internal Twitter feeds where employees share their thinking, priorities, and provocative ideas publicly. Everyone has one, from researchers to growth PMs to Dario Amodei himself. Employees openly disagree with leadership in these channels, and that is encouraged.

    These channels serve a dual purpose. First, they help scale cultural values and operating principles as the company grows rapidly. When Amol posts about “the importance of being comfortable leaving money on the table,” every new engineer on the growth team absorbs that principle. Second, and perhaps more importantly for the long term, these channels become structured context that Claude can reference. The HR team has even documented which internal documents Claude should reference for specific topics. Amol sees this as something every company will eventually need to do: share thinking in a structured way so that the AI agents running throughout the organization have the context they need.

    AI Safety as Commercial Strategy

    Anthropic is structured as a Public Benefit Corporation (PBC), not a standard Delaware C-Corp. This legally allows the company to optimize for public benefit rather than being bound solely to maximize shareholder value.

    Amol says the company has repeatedly taken significant commercial hits for safety reasons, including delaying product launches when safety risks were identified. He also makes a striking claim: what Anthropic says publicly about AI risk is actually a softer version of what they believe internally. The internal view on the potential downsides of powerful AI is more aggressive than the public messaging.

    From a growth perspective, Amol frames safety as a long-term competitive advantage. Growth teams at most companies try to squeeze every last dollar. Anthropic’s growth team is “very comfortable forgoing metric impact” to protect brand, quality, and safety. He argues this is how all the best products operate, and that as the stakes of AI get higher, Anthropic’s credible commitment to safety will become a moat.

    Advice for Thriving in the AI Era

    Amol’s advice for product managers and growth practitioners boils down to four points. First, stay on top of the tools. Try every new model release. Something that did not work three months ago may work now, and you will not know unless you go back and test it. Second, go deep on your unique spike rather than trying to be well-rounded. The PM who can also design is a unicorn. The engineer who thinks like a PM is a unicorn. Find your interdisciplinary edge and double down. Third, be radically adaptable. Amol estimates that 50 to 70 percent of how you operated in the past is now irrelevant. Clinging to old playbooks creates friction. Fourth, think in exponentials, not linear projections. If you are looking at the AI landscape through a linear lens, you will consistently underestimate how quickly things are moving.

    Thoughts

    This interview is one of the most information-dense conversations about growth strategy in AI that has been published so far. A few things stand out.

    The CASH system is the most concrete example yet of a company using AI to automate its own growth loop. The fact that it currently performs at a junior PM level is almost beside the point. What matters is the trajectory: it went from impossible to functional in a few months. If models continue improving at their current pace, this system will be operating at a senior PM level within a year. Every growth team at every AI company should be building their own version of this right now.

    The PM ratio insight is genuinely surprising and underreported. The default assumption in the tech industry is that AI will reduce headcount across all functions. Amol is making the case that in the near term, the opposite is true for PMs. Engineering output is exploding, and someone needs to direct all that output toward the right problems. That is a fundamentally human, organizational, political job that AI is not close to automating.

    The coding flywheel story is also worth highlighting because it shows the power of strategic focus in a world of unlimited possibilities. Anthropic had a generalist technology that could do almost anything, and they deliberately narrowed their focus to one vertical. That decision, made in 2021 before anyone knew what the market would look like, is arguably the single most important strategic bet in the company’s history.

    Finally, the notebook channels concept deserves more attention. The idea that employees should share their thinking in structured, searchable formats is not just a culture tool. It is an infrastructure investment for an AI-native future where agents need organizational context to be effective. Companies that build this habit early will have a significant advantage when agent-driven workflows become the norm.

    The uncomfortable subtext of this entire conversation is that Anthropic’s growth team, as talented as they clearly are, is riding a wave created almost entirely by the research team. Several YouTube commenters pointed this out, and Amol himself acknowledges it directly. The models are the product. The growth team’s job is to make sure users discover and adopt what the models can do. That is not a small job, especially at this scale, but it is a fundamentally different job than driving growth at a product that does not sell itself.

  • Jensen Huang on Lex Fridman: NVIDIA’s CEO Reveals His Vision for the AI Revolution, Scaling Laws, and Why Intelligence Is Now a Commodity

    A deep breakdown of Lex Fridman Podcast #494 featuring Jensen Huang, CEO of NVIDIA, covering extreme co-design, the four AI scaling laws, CUDA’s origin story, the future of programming, AGI timelines, and what it takes to lead the world’s most valuable company.

    TLDW (Too Long, Didn’t Watch)

    Jensen Huang sat down with Lex Fridman for a sprawling two-and-a-half-hour conversation covering the full arc of NVIDIA’s evolution from a GPU gaming company to the engine of the AI revolution. Jensen explains how NVIDIA now thinks in terms of rack-scale and pod-scale computing rather than individual chips, breaks down his four AI scaling laws (pre-training, post-training, test time, and agentic), and reveals the near-existential bet the company made putting CUDA on GeForce. He shares his views on China’s tech ecosystem, his deep respect for TSMC, why he turned down the chance to become TSMC’s CEO, how Elon Musk’s systems engineering approach built Colossus in record time, and why he believes AGI already exists. He also discusses why the future of programming is really about “specification,” why intelligence is being commoditized while humanity is the true superpower, and how he manages the enormous pressure of leading a company that nations and economies depend on. His core message: do not let the democratization of intelligence cause you anxiety. Instead, let it inspire you.

    Key Takeaways

    1. NVIDIA No Longer Thinks in Chips. It Thinks in AI Factories.

    Jensen’s mental model of what NVIDIA builds has fundamentally changed. He no longer picks up a chip to represent a new product generation. Instead, his mental model is a gigawatt-scale AI factory with power generation, cooling systems, and thousands of engineers bringing it online. The unit of computing at NVIDIA has evolved from GPU to computer to cluster to AI factory. His next mental “click” is planetary-scale computing.

    2. Extreme Co-Design Is NVIDIA’s Secret Weapon

    The reason NVIDIA dominates is not just better GPUs. It is the extreme co-design of the entire stack: GPU, CPU, memory, networking, switching, power, cooling, storage, software, algorithms, and applications. Jensen explains that when you distribute workloads across tens of thousands of computers and want them to go a million times faster (not just 10,000 times), every single component becomes a bottleneck. This is a restatement of Amdahl’s Law at scale. NVIDIA’s organizational structure directly reflects this co-design philosophy. Jensen has 60+ direct reports, holds no one-on-ones, and runs every meeting as a collective problem-solving session where specialists across all domains are present and contribute.

    3. The Four AI Scaling Laws Are a Flywheel

    Jensen outlined four distinct scaling laws that form a continuous loop:

    Pre-training scaling: Larger models plus more data equals smarter AI. The industry panicked when people said data was running out, but synthetic data generation has removed that ceiling. Data is now limited by compute, not by human generation.

    Post-training scaling: Fine-tuning, reinforcement learning from human feedback, and curated data continue to scale AI capabilities beyond what pre-training alone achieves.

    Test-time scaling: Inference is not “easy” as many predicted. It is thinking, reasoning, planning, and search. It is far more compute-intensive than memorization and pattern matching. This is why inference chips cannot be commoditized the way many predicted.

    Agentic scaling: A single AI agent can spawn sub-agents, creating teams. This is like scaling a company by hiring more employees rather than trying to make one person faster. The experiences generated by agents feed back into pre-training, creating a flywheel.

    4. The CUDA Bet Nearly Killed NVIDIA

    Putting CUDA on GeForce was one of the most consequential technology decisions in modern history. It increased GPU costs by roughly 50%, which crushed the company’s gross margins at a time when NVIDIA was a 35% gross margin business. The company’s market cap dropped from around $7-8 billion to approximately $1.5 billion. But Jensen understood that install base defines a computing architecture, not elegance. He pointed to x86 as proof: a less-than-elegant architecture that defeated beautifully designed RISC alternatives because of its massive install base. CUDA on GeForce put a supercomputer in the hands of every researcher, every scientist, every student. It took a decade to recover, but that install base became the foundation of the deep learning revolution.

    5. NVIDIA’s Moat Is Trust, Velocity, and Install Base

    Jensen was direct about NVIDIA’s competitive advantage. The CUDA install base is the number one asset. Developers target CUDA first because it reaches hundreds of millions of computers, is in every cloud, every OEM, every country, every industry. NVIDIA ships a new architecture roughly every year. No company in history has built systems of this complexity at this cadence. And the trust that NVIDIA will maintain, improve, and optimize CUDA indefinitely is something developers can count on. If someone created “GUDA” or “TUDA” tomorrow, it would not matter. The install base, velocity of execution, ecosystem breadth, and earned trust create a compounding advantage that is nearly impossible to replicate.

    6. Jensen Believes AGI Is Already Here

    When asked about AGI timelines, Jensen said he believes AGI has been achieved. His reasoning is practical: an agentic system today could plausibly create a web service, achieve virality, and generate a billion dollars in revenue, even if temporarily. This is not meaningfully different from many internet-era companies that did the same thing with technology no more sophisticated than what current AI agents can produce. He does not believe 100,000 agents could build another NVIDIA, but he believes a single agent-driven viral product is within reach right now.

    7. The Future of Programming Is Specification, Not Syntax

    Jensen believes the number of programmers in the world will increase dramatically, not decrease. His reasoning: the definition of coding is expanding to include specification and architectural description in natural language. This expands the population of “coders” from roughly 30 million professional developers to potentially a billion people. Every carpenter, plumber, accountant, and farmer who can describe what they want a computer to build is now a coder. The artistry of the future is knowing where on the spectrum of specification to operate, from highly prescriptive to exploratory and open-ended.

    8. China Is the Fastest Innovating Country in the World

    Jensen gave a nuanced and detailed explanation of why China’s tech ecosystem is so formidable. About 50% of the world’s AI researchers are Chinese. China’s tech industry emerged during the mobile cloud era, so it was built on modern software from the start. The country’s provincial competition creates an insane internal competitive environment. And the cultural norm of knowledge-sharing through school and family networks means China effectively operates as an open-source ecosystem at all times. This is why Chinese companies contribute disproportionately to open source. Their engineers’ brothers, friends, and schoolmates work at competing companies, and sharing knowledge is the cultural default.

    9. The Power Grid Has Enormous Waste That AI Can Exploit

    Jensen proposed a pragmatic solution to the energy problem for AI data centers. Power grids are designed for worst-case conditions with margin, but 99% of the time they run at around 60% of peak capacity. That idle capacity is simply wasted. Jensen wants data centers to negotiate flexible contracts where they absorb excess power most of the time and gracefully degrade during rare peak demand periods. This requires three things: customers accepting that “six nines” uptime may not always be necessary, data centers that can dynamically shift workloads, and utilities that offer tiered power delivery contracts instead of all-or-nothing commitments.

    10. Jensen Turned Down the CEO Role at TSMC

    In 2013, TSMC founder Morris Chang offered Jensen the chance to become CEO of TSMC. Jensen confirmed the story is true and said he was deeply honored. But he had already envisioned what NVIDIA could become and felt it was his sole responsibility to make that vision happen. He sees the relationship with TSMC as one built on three decades of trust, hundreds of billions of dollars in business, and zero formal contracts.

    11. Elon Musk’s Systems Engineering Approach Is Instructive

    Jensen praised Elon Musk’s approach to building the Colossus supercomputer in Memphis in just four months. He highlighted several principles: Elon questions everything relentlessly, strips every process down to the minimum necessary, is physically present at the point of action, and his personal urgency creates urgency in every supplier. Jensen drew a parallel to NVIDIA’s own “speed of light” methodology, where every process is benchmarked against the physical limits of what is possible, not against historical baselines.

    12. Intelligence Is a Commodity. Humanity Is Not.

    Perhaps the most philosophical takeaway from the conversation: Jensen argued that intelligence is a functional, measurable thing that is being commoditized. He surrounded himself with 60 direct reports who are all “superhuman” in their respective domains, more educated and deeper in their specialties than he is. Yet he sits in the middle orchestrating all of them. This proves that intelligence alone does not determine success. Character, compassion, grit, determination, tolerance for embarrassment, and the ability to endure suffering are the real differentiators. Jensen wants the audience to understand that the word we should elevate is not intelligence but humanity.

    Detailed Summary

    From GPU Maker to AI Infrastructure Company

    The conversation opened with Jensen explaining NVIDIA’s evolution from chip-scale to rack-scale to pod-scale design. The Vera Rubin pod, announced at GTC, contains seven chip types, five purpose-built rack types, 40 racks, 1.2 quadrillion transistors, nearly 20,000 NVIDIA dies, over 1,100 Rubin GPUs, 60 exaflops of compute, and 10 petabytes per second of scale bandwidth. And that is just one pod. NVIDIA plans to produce roughly 200 of these pods per week.

    Jensen explained that extreme co-design is necessary because the problems AI must solve no longer fit inside a single computer. When you distribute a workload across 10,000 computers but want a million-fold speedup, everything becomes a bottleneck: computation, networking, switching, memory, power, cooling. This is fundamentally an Amdahl’s Law problem at planetary scale. If computation represents only 50% of the workload, speeding it up infinitely only doubles total throughput. Every layer must be co-optimized simultaneously.
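
    Amdahl’s Law makes that arithmetic explicit. If a fraction p of the workload is accelerated by a factor s, the overall speedup is

    ```latex
    S(s) = \frac{1}{(1 - p) + p/s}, \qquad \lim_{s \to \infty} S(s) = \frac{1}{1 - p}
    ```

    With p = 0.5, even an infinite speedup of the accelerated fraction yields S = 2, the doubling Jensen describes. The only escape is co-optimizing every layer so that the unaccelerated fraction (1 − p) itself shrinks.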

    NVIDIA’s organizational structure is a direct reflection of this co-design philosophy. Jensen has more than 60 direct reports, almost all with deep engineering expertise. He does not do one-on-ones. Every meeting is a collective problem-solving session where the memory expert, the networking expert, the cooling expert, and the power delivery expert are all in the room together, attacking the same problem.

    The Strategic History of CUDA

    Jensen walked through the step-by-step journey from graphics accelerator to computing platform. The company invented a programmable pixel shader, then added IEEE-compatible FP32 to its shaders, then put C on top of that (called Cg), and eventually arrived at CUDA. The critical strategic decision was putting CUDA on GeForce, a consumer product.

    This was nearly an existential move. It increased GPU costs by roughly 50% and consumed all of the company’s gross profit at a time when NVIDIA was a 35% gross margin business. The market cap cratered from around $7-8 billion to approximately $1.5 billion. But Jensen understood a principle that many technologists overlook: install base defines a computing architecture. x86 survived not because it was elegant but because it was everywhere. CUDA on GeForce put a supercomputing capability in the hands of every gamer, every student, every researcher who built their own PC. When the deep learning revolution arrived, CUDA was already the foundation.

    How Jensen Leads and Makes Decisions

    Jensen described a leadership philosophy built on continuous reasoning in public. He does not make announcements in the traditional sense. Instead, he shapes the belief systems of his employees, board, partners, and the broader industry over months and years by reasoning through decisions step by step, using every new piece of external information as a brick in the foundation. By the time he formally announces a strategic direction, the reaction is not surprise but rather, “What took you so long?”

    He applies this same approach to his supply chain. He personally visits CEOs of DRAM companies, packaging companies, and infrastructure providers. He explains the dynamics of the industry, shares his vision of future demand, and helps them reason through why they should make multi-billion-dollar capital investments. Three years ago, he convinced DRAM CEOs that HBM memory would become mainstream for data centers, which sounded ridiculous at the time. Those companies had record years as a result.

    Jensen’s “speed of light” methodology is his framework for decision-making. Every process, every design, every cost is benchmarked against the physical limits of what is theoretically possible. He prefers this to continuous improvement, which he views as incrementalism. He would rather strip a 74-day process back to zero and ask, “If we built this from scratch today, how long would it take?” Often the answer is six days, and the remaining 68 days are filled with accumulated compromises that can be challenged individually.

    AI Scaling Laws and the Future of Compute

    Jensen broke down the four scaling laws in detail. The pre-training scaling law, which depends on model size and data volume, was thought to be hitting a wall when the industry worried about running out of high-quality human-generated data. Jensen argued this concern is misplaced. Synthetic data generation has effectively removed the ceiling, and the constraint is now compute, not data.

    Post-training continues to scale through fine-tuning and reinforcement learning. Test-time scaling was the most counterintuitive for the industry. Many predicted that inference would be “easy” and that inference chips would be small, cheap, and commoditized. Jensen saw this as fundamentally wrong. Inference is thinking: reasoning, planning, search, decomposing novel problems into solvable pieces. Thinking is much harder than reading, and test-time compute is intensely resource-hungry.

    Agentic scaling is the newest frontier. A single AI agent can spawn sub-agents, effectively multiplying intelligence the way a company scales by hiring. The experiences and data generated by agentic systems feed back into pre-training, creating a continuous improvement loop. Jensen described this as the reason NVIDIA designed the Vera Rubin rack architecture differently from the Grace Blackwell architecture. Grace Blackwell was optimized for running large language models. Vera Rubin is designed for agents, which need to access files, use tools, do research, and spin off sub-agents. NVIDIA anticipated this architectural shift two and a half years before tools like OpenClaw arrived.

    China, TSMC, and the Global Supply Chain

    Jensen provided a thoughtful analysis of China’s tech ecosystem. He identified several structural advantages: 50% of the world’s AI researchers are Chinese, the tech industry was born during the mobile cloud era (making it natively modern), provincial competition creates internal Darwinian pressure, and the culture of knowledge-sharing through school and family networks makes China effectively open-source by default.

    On TSMC, Jensen emphasized that the deepest misunderstanding about the company is that its technology is its only advantage. Their manufacturing orchestration system, which dynamically manages the shifting demands of hundreds of companies, is “completely miraculous.” Their culture uniquely balances bleeding-edge technology excellence with world-class customer service. And the trust that Jensen places in TSMC is extraordinary: three decades of partnership, hundreds of billions of dollars in business, and no formal contract.

    Jensen also discussed the AI supply chain more broadly. NVIDIA has roughly 200 suppliers contributing technology to each rack. Jensen personally manages these relationships, flying to supplier sites, explaining industry dynamics, and helping CEOs reason through multi-billion-dollar investment decisions. When asked if supply chain bottlenecks keep him up at night, he said no, because he has already communicated what NVIDIA needs, his partners have told him what they will deliver, and he believes them.

    The Energy Challenge and Space Computing

    On the energy front, Jensen proposed a practical approach to the power problem. Rather than waiting for new power generation, he wants to capture the enormous waste already present in the grid. Power infrastructure is designed for worst-case peak demand, but 99% of the time it runs far below capacity. AI data centers could absorb this excess capacity with flexible contracts that allow graceful degradation during rare peak periods.
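
    A back-of-envelope calculation makes the waste concrete, using the utilization figures quoted in the interview (the 10 GW grid size below is an arbitrary assumption for illustration):

    ```python
    # Back-of-envelope: how much grid headroom a flexible data center could absorb.
    # The 10 GW figure is an arbitrary example; 0.60 and 0.99 are from the interview.
    peak_capacity_gw = 10.0
    avg_utilization = 0.60      # grids run near 60% of peak most of the time
    headroom_available = 0.99   # fraction of hours the headroom actually exists

    headroom_gw = peak_capacity_gw * (1 - avg_utilization)
    usable_gwh_per_year = headroom_gw * headroom_available * 8760  # hours per year

    print(f"Idle headroom: {headroom_gw:.1f} GW")
    print(f"Harvestable energy: {usable_gwh_per_year:,.0f} GWh/year")
    # Idle headroom: 4.0 GW
    # Harvestable energy: 34,690 GWh/year
    ```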

    On space computing, NVIDIA already has GPUs in orbit for satellite imaging. Jensen acknowledged the cooling challenge (no conduction or convection in space, only radiation) but sees it as a future frontier worth cultivating. In the meantime, he is focused on the lower-hanging fruit of eliminating waste in the terrestrial power grid.

    On AGI, Jobs, and the Human Future

    Jensen stated directly that he believes AGI has been achieved, at least by the practical definition of an AI system capable of creating a billion-dollar company. He sees it as plausible that an agent could build a viral web service that briefly generates enormous revenue, just as many internet-era companies did with technology no more sophisticated than what current AI agents produce.

    On jobs, Jensen was both compassionate and clear-eyed. He told the story of radiology: computer vision became superhuman around 2019-2020, and the prediction was that radiologists would disappear. Instead, the number of radiologists grew because AI allowed them to study more scans, diagnose better, and serve more patients. The purpose of the job (diagnosing disease) did not change, even though the tools changed completely.

    He applied this principle broadly: the number of software engineers at NVIDIA will grow, not decline, because their purpose is solving problems, not writing lines of code. The number of programmers globally will grow because the definition of coding is expanding to include natural language specification, opening it up to potentially a billion people.

    His advice to anyone worried about their job is straightforward: go use AI now. Become expert in it. Every profession, from carpenter to pharmacist to lawyer, will be elevated by AI tools. The people who learn to use AI will be the ones who get hired, promoted, and empowered.

    Mortality, Succession, and Legacy

    The conversation closed with deeply personal reflections. Jensen said he really does not want to die. He sees the current moment as a “once in a humanity experience.” He does not believe in traditional succession planning. Instead, he believes the best succession strategy is to pass on knowledge continuously, every single day, in every meeting, as fast as possible. His hope is to die on the job, instantaneously, with no long period of suffering.

    He described a vision for a kind of digital continuity: sending a humanoid robot into space, continuously improving it in flight, and eventually uploading the consciousness derived from a lifetime of communications, decisions, and reasoning to catch up with it at the speed of light.

    On the emotional experience of leading NVIDIA, Jensen was candid about hitting psychological low points regularly. His coping mechanism is decomposition: break the problem into pieces, reason about what you can control, tell someone who can help, share the burden, and then deliberately forget what is behind you. He compared this to the mental discipline of great athletes who focus only on the next point.

    His final message was about the relationship between intelligence and humanity. Intelligence, he argued, is functional. It is being commoditized. Humanity, character, compassion, grit, tolerance for embarrassment, and the capacity for suffering are the true superpowers. The word society should elevate is not intelligence but humanity.

    Thoughts

    This is one of the most substantive CEO interviews of 2026. What makes it remarkable is not just the breadth of topics but the depth of reasoning Jensen demonstrates in real time. You can actually watch him think through problems on the spot, which is rare for someone at his level.

    A few things stand out. First, the CUDA origin story is one of the great strategic narratives in tech history. The decision to absorb a 50% cost increase on a consumer product, watching your market cap collapse by 80%, and holding the course for a decade because you understood the power of install base is the kind of conviction that separates generational companies from everyone else.

    Second, Jensen’s framing of the four scaling laws as a flywheel is the clearest articulation anyone has given of why AI compute demand will continue to accelerate. Most people understand pre-training. Fewer understand test-time scaling. Almost nobody is thinking about agentic scaling as a compute multiplier. Jensen has been thinking about it for years and already designed hardware for it before the software ecosystem caught up.

    Third, the discussion on jobs deserves attention. The radiology example is powerful because it is a completed experiment, not a prediction. The profession that was supposed to be eliminated first by AI instead grew. The mechanism is straightforward: when you automate the task, you expand the capacity of the purpose, and demand for the purpose increases. This does not mean there will be no pain or dislocation. Jensen acknowledged that explicitly. But the historical pattern is clear.

    Finally, the philosophical distinction between intelligence and humanity is the kind of framing that could genuinely help people navigate the anxiety of this moment. If you define your value by your intelligence alone, AI commoditization is terrifying. If you define your value by your character, your compassion, your tolerance for suffering, and your willingness to keep going when everything goes wrong, then AI is just the most powerful set of tools you have ever been given.

    Jensen Huang is 62 years old, has been running NVIDIA for 34 years, and shows no signs of slowing down. If anything, his conviction about the future is accelerating alongside his company’s growth.

    Watch the full episode: Lex Fridman Podcast #494 with Jensen Huang

  • Andrej Karpathy on AutoResearch, AI Agents, and Why He Stopped Writing Code: Full Breakdown of His 2026 No Priors Interview

    TLDW (Too Long, Didn’t Watch)

    Andrej Karpathy sat down with Sarah Guo on the No Priors podcast (March 2026) and delivered one of the most information-dense conversations about the current state of AI agents, autonomous research, and the future of software engineering. The core thesis: since December 2025, Karpathy has essentially stopped writing code by hand. He now “expresses his will” to AI agents for 16 hours a day, and he believes we are entering a “loopy era” where autonomous systems can run experiments, train models, and optimize hyperparameters without a human in the loop. His project AutoResearch proved this works by finding improvements to a model he had already hand-tuned over two decades of experience. The conversation also covers the death of bespoke apps, the future of education, open vs. closed source models, robotics, job market impacts, and why Karpathy chose to stay independent from frontier labs.

    Key Takeaways

    1. The December 2025 Shift Was Real and Dramatic

    Karpathy describes a hard flip that happened in December 2025 where he went from writing 80% of his own code to writing essentially none of it. He says the average software engineer’s default workflow has been “completely different” since that month. He calls this state “AI psychosis” and says he feels anxious whenever he is not at the forefront of what is possible with these tools.

    2. AutoResearch: Agents That Do AI Research Autonomously

    AutoResearch is Karpathy’s project where an AI agent is given an objective metric (like validation loss), a codebase, and boundaries for what it can change. It then loops autonomously, running experiments, tweaking hyperparameters, modifying architectures, and committing improvements without any human in the loop. When Karpathy ran it overnight on a model he had already carefully tuned by hand over years, it found optimizations he had missed, including forgotten weight decay on value embeddings and insufficiently tuned Adam betas.

    3. The Name of the Game Is Removing Yourself as the Bottleneck

    Karpathy frames the current era as a shift from optimizing your own productivity to maximizing your “token throughput.” The goal is to arrange tasks so that agents can run autonomously for extended periods. You are no longer the worker. You are the orchestrator, and every minute you spend in the loop is a minute the system is held back.

    4. Mastery Now Means Managing Multiple Agents in Parallel

    The vision of mastery is not writing better code. It is managing teams of agents simultaneously. Karpathy references Peter Steinberger’s workflow of having 10+ Codex agents running in parallel across different repos, each taking about 20 minutes per task. You move in “macro actions” over your codebase, delegating entire features rather than writing individual functions.

    5. Personality and Soul Matter in Coding Agents

    Karpathy praises Claude’s personality, saying it feels like a teammate who gets excited about what you are building. He contrasts this with Codex, which he calls “very dry” and disengaged. He specifically highlights that Claude’s praise feels earned because it does not react equally to half-baked ideas and genuinely good ones. He credits Peter (OpenClaw) with innovating on the “soul” of an agent through careful prompt design, memory systems, and a unified WhatsApp interface.

    6. Apps Are Dead. APIs and Agents Are the Future.

    Karpathy built “Dobby the Elf Claw,” a home automation agent that controls his Sonos, lights, HVAC, shades, pool, spa, and security cameras through natural language over WhatsApp. He did this by having agents scan his local network, reverse-engineer device APIs, and build a unified dashboard. His conclusion: most consumer apps should not exist. Everything should be API endpoints that agents can call on behalf of users. The “customer” of software is increasingly the agent, not the human.

    7. AutoResearch Could Become a Distributed Computing Project

    Karpathy envisions an “AutoResearch at Home” model inspired by SETI@home and Folding@home. Because it is expensive to find code optimizations but cheap to verify them (just run the training and check the metric), untrusted compute nodes on the internet could contribute experimental results. He draws an analogy to blockchain: instead of blocks you have commits, instead of proof of work you have expensive experimentation, and instead of monetary reward you have leaderboard placement. He speculates that a global swarm of agents could potentially outperform frontier labs.

    8. Education Is Being Redirected Through Agents

    Karpathy describes his MicroGPT project, a 200-line distillation of LLM training to its bare essence. He says he started to create a video walkthrough but realized that is no longer the right format. Instead, he now “explains things to agents,” and the agents can then explain them to individual humans in their own language, at their own pace, with infinite patience. He envisions education shifting to “skills” (structured curricula for agents) rather than lectures or guides for humans directly.

    9. The Jaggedness Problem Is Still Real

    Karpathy describes current AI agents as simultaneously feeling like a “brilliant PhD student who has been a systems programmer their entire life” and a 10-year-old. He calls this “jaggedness,” and it stems from reinforcement learning only optimizing for verifiable domains. Models can move mountains on agentic coding tasks but still tell the same bad joke they told four years ago (“Why don’t scientists trust atoms? Because they make everything up.”). Things outside the RL reward loop remain stuck.

    10. Open Source Is Healthy and Necessary, Even If Behind

    Karpathy estimates open source models are now roughly 6 to 8 months behind closed frontier models, down from 18 months and narrowing. He draws a parallel to Linux: the industry has a structural need for a common, open platform. He is “by default very suspicious” of centralization and wants more labs, more voices in the room, and an “ensemble” approach to AI governance. He thinks it is healthy that open source exists slightly behind the frontier, eating through basic use cases while closed models handle “Nobel Prize kind of work.”

    11. Digital Transformation Will Massively Outpace Physical Robotics

    Karpathy predicts a clear ordering: first, a massive wave of “unhobbling” in the digital space where everything gets rewired and made 100x more efficient. Then, activity moves to the interface between digital and physical (sensors, cameras, lab equipment). Finally, the physical world itself transforms, but on a much longer timeline because “atoms are a million times harder than bits.” He notes that robotics requires enormous capital expenditure and conviction, and most self-driving startups from 10 years ago did not survive long term.

    12. Why Karpathy Stays Independent From Frontier Labs

    Karpathy gives a nuanced answer about why he is not working at a frontier lab. He says employees at these labs cannot be fully independent voices because of financial incentives and social pressure. He describes this as a fundamental misalignment: the people building the most consequential technology are also the ones who benefit most from it financially. He values being “more aligned with humanity” outside the labs, though he acknowledges his judgment will inevitably drift as he loses visibility into what is happening at the frontier.

    Detailed Summary

    The AI Psychosis and the End of Hand-Written Code

    The conversation opens with Karpathy describing what he calls a state of perpetual “AI psychosis.” Since December 2025, he has not typed a line of code. The shift was not gradual. It was a hard flip from doing 80% of his own coding to doing almost none. He compares the anxiety of unused agent capacity to the old PhD feeling of watching idle GPUs. Except now, the scarce resource is not compute. It is tokens, and you feel the pressure to maximize your token throughput at all times.

    He describes the modern workflow: you have multiple coding agents (Claude Code, Codex, or similar harnesses) running simultaneously across different repositories. Each agent takes about 20 minutes on a well-scoped task. You delegate entire features, review the output, and move on. The job is no longer typing. It is orchestration. And when it does not work, the overwhelming feeling is that it is a “skill issue,” not a capability limitation.

    Karpathy says most people, even his own parents, do not fully grasp how dramatic this shift has been. The default workflow of any software engineer sitting at a desk today is fundamentally different from what it was six months ago.

    AutoResearch: Closing the Loop on AI Research

    The centerpiece of the conversation is AutoResearch, Karpathy’s project for fully autonomous AI research. The setup is deceptively simple: give an agent an objective metric (like validation loss on a language model), a codebase to modify, and boundaries for what it can change. Then let it loop. It generates hypotheses, runs experiments, evaluates results, and commits improvements. No human in the loop.
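
    To make the loop concrete, here is a minimal sketch of the shape Karpathy describes. The agent interface, patch object, and train.py contract are hypothetical stand-ins, not his actual implementation.

    ```python
    # Minimal sketch of an AutoResearch-style loop. The agent API, the patch
    # object, and the train.py output contract are hypothetical stand-ins.
    import subprocess

    def run_experiment() -> float:
        """Run training and return the objective metric (validation loss)."""
        out = subprocess.run(["python", "train.py"], capture_output=True,
                             text=True, check=True)
        return float(out.stdout.strip().splitlines()[-1])  # assume val loss is printed last

    def autoresearch_loop(agent, iterations: int) -> float:
        best_loss = run_experiment()                 # baseline: the hand-tuned setup
        for _ in range(iterations):
            patch = agent.propose_change(best_loss)  # hypothesis, within allowed boundaries
            patch.apply()
            loss = run_experiment()                  # expensive: run the experiment
            if loss < best_loss:                     # cheap: did the metric improve?
                best_loss = loss
                patch.commit()                       # keep the improvement
            else:
                patch.revert()                       # discard it and try another idea
        return best_loss
    ```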

    Karpathy was surprised it worked as well as it did. He had already hand-tuned his NanoGPT-derived training setup over the years, drawing on two decades of experience. When he let AutoResearch run overnight, it found improvements he had missed: weight decay had been left off the value embeddings, and the Adam optimizer betas were undertuned. These are exactly the kinds of settings that interact in complex ways a human researcher might not systematically explore.

    The deeper insight is structural: everything around frontier-level intelligence is about extrapolation and scaling laws. You do massive exploration on smaller models and then extrapolate to larger scales. AutoResearch is perfectly suited for this because the experimentation is expensive but the verification is cheap. Did the validation loss go down? Yes or no.
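
    The standard instrument for that extrapolation is a power-law fit on cheap runs. A hedged sketch, with every number invented for illustration (none are from the conversation):

    ```python
    # Fit L(C) = a * C**b on small-scale runs (b < 0), then project the loss
    # at a compute budget you have not paid for yet. All numbers are made up.
    import numpy as np

    compute  = np.array([1e17, 1e18, 1e19, 1e20])  # training FLOPs of cheap runs
    val_loss = np.array([3.90, 3.40, 3.00, 2.65])  # measured validation losses

    b, log_a = np.polyfit(np.log(compute), np.log(val_loss), deg=1)
    predicted = np.exp(log_a) * 1e22 ** b          # extrapolate to frontier scale

    print(f"projected val loss at 1e22 FLOPs: {predicted:.2f}")
    ```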

    Karpathy envisions this scaling beyond a single machine. His “AutoResearch at Home” concept borrows from distributed computing projects like Folding@home. Because verification is cheap but search is expensive, you can accept contributions from untrusted workers across the internet. He draws a blockchain analogy: commits instead of blocks, experimentation as proof of work, leaderboard placement as reward. A global swarm of agents contributing compute could, in theory, rival frontier labs that have massive but centralized resources.
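
    The asymmetry that makes untrusted workers viable fits in a few lines. This is a sketch of the coordinator side only, with hypothetical interfaces:

    ```python
    # A worker may have burned hours searching for this patch, but the
    # coordinator verifies it with a single run. Interfaces are hypothetical.
    def accept_contribution(patch, claimed_loss: float, best_loss: float,
                            rerun_experiment) -> bool:
        """Accept an untrusted patch only if its claimed improvement reproduces."""
        if claimed_loss >= best_loss:
            return False                    # not even claimed to be an improvement
        patch.apply()
        verified_loss = rerun_experiment()  # one verification run, not a search
        patch.revert()
        return verified_loss < best_loss    # leaderboard credit only on reproduction
    ```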

    The Claw Paradigm and the Death of Apps

    Karpathy introduces the concept of the “claw,” a persistent, looping agent that operates in its own sandbox, has sophisticated memory, and works on your behalf even when you are not watching. This goes beyond a single chat session with an AI. A claw has persistence, autonomy, and the ability to interact with external systems.

    His personal example is “Dobby the Elf Claw,” a home automation agent that controls his entire smart home through WhatsApp. The agent scanned his local network, found his Sonos speakers, reverse-engineered the API, and started playing music in three prompts. It did the same for his lights, HVAC, shades, pool, spa, and security cameras (using a Qwen vision model for change detection on camera feeds).

    The broader point is that this renders most consumer apps unnecessary. Why maintain six different smart home apps when a single agent can call all the APIs directly? Karpathy argues the industry needs to reconfigure around the idea that the customer is increasingly the agent, not the human. Everything should be exposed as API endpoints. The intelligence layer (the LLM) is the glue that ties it all together.
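
    A hedged sketch of what that architecture looks like: devices reduced to bare endpoints, with the model choosing calls from a natural-language request. The endpoints, tool names, and llm interface below are all hypothetical.

    ```python
    # A claw-style glue layer: no per-device apps, just endpoints the agent can
    # call. Every URL and the llm.plan_tool_calls API are invented for illustration.
    import requests

    TOOLS = {
        "sonos.play":  lambda room, uri: requests.post(
            f"http://sonos.local/{room}/play", json={"uri": uri}),
        "lights.set":  lambda room, pct: requests.post(
            f"http://hub.local/lights/{room}", json={"level": pct}),
        "hvac.target": lambda temp_c: requests.post(
            "http://hub.local/hvac", json={"target_c": temp_c}),
    }

    def handle_message(message: str, llm) -> None:
        """Map one chat message to zero or more device API calls."""
        for call in llm.plan_tool_calls(message, tools=list(TOOLS)):
            TOOLS[call.name](*call.args)  # the LLM is the glue, not an app
    ```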

    He predicts this will become table stakes within a few years. Today it requires vibe coding and direct agent interaction. Soon, even open source models will handle this trivially. The barrier will come down until every person has a claw managing their digital life through natural language.

    Model Jaggedness and the Limits of Reinforcement Learning

    One of the most technically interesting sections covers what Karpathy calls “jaggedness.” Current AI models are simultaneously superhuman at verifiable tasks (coding, math, structured reasoning) and surprisingly mediocre at anything outside the RL reward loop. His go-to example: ask any frontier model to tell you a joke, and you will get the same one from four years ago. “Why don’t scientists trust atoms? Because they make everything up.” The models have improved enormously, but joke quality has not budged because it is not being optimized.

    This jaggedness creates an uncanny valley in interaction. Karpathy describes the experience as talking to someone who is simultaneously a brilliant PhD systems programmer and a 10-year-old. Humans have some variance in ability across domains, but nothing like this. The implication is that the narrative of “general intelligence improving across all domains for free as models get smarter” is not fully accurate. There are blind spots, and they cluster around anything that lacks objective evaluation criteria.

    He and Sarah Guo discuss whether this should lead to model “speciation,” where specialized models are fine-tuned for specific domains rather than one monolithic model trying to be good at everything. Karpathy thinks speciation makes sense in theory (like the diversity of brains in the animal kingdom) but says the science of fine-tuning without losing capabilities is still underdeveloped. The labs are still pursuing monocultures.

    Open Source, Centralization, and Power Balance

    Karpathy, a long-time open source advocate, estimates the gap between closed and open source models has narrowed from 18 months to roughly 6 to 8 months. He draws a direct parallel to Linux: despite closed alternatives like Windows and macOS, the industry structurally needs a common open platform. Linux runs on 60%+ of servers because businesses need a shared foundation they feel safe using.

    The challenge for open source AI is capital expenditure. Training frontier models is astronomically expensive, and that is where the comparison to Linux breaks down somewhat. But Karpathy argues the current dynamic is actually healthy: frontier labs push the bleeding edge with closed models, open source follows 6 to 8 months behind, and that trailing capability is still enormously powerful for the vast majority of use cases.

    He expresses deep skepticism about centralization, citing his Eastern European background and the historical track record of concentrated power. He wants more labs, more independent voices, and an “ensemble” approach to decision-making about AI’s future. He worries about the current trend of further consolidation even among the top labs.

    The Job Market: Digital Unhobbling and the Jevons Paradox

    Karpathy recently published an analysis of Bureau of Labor Statistics jobs data, color-coded by which professions primarily manipulate digital information versus physical matter. His thesis: digital professions will be transformed first and fastest because bits are infinitely easier to manipulate than atoms. He calls this “unhobbling,” the release of a massive overhang of digital work that humans simply did not have enough thinking cycles to process.

    On whether this means fewer software engineering jobs, Karpathy is cautiously optimistic. He invokes the Jevons Paradox: when something becomes cheaper, demand often increases so much that total consumption goes up. The canonical example is ATMs and bank tellers. ATMs were supposed to replace tellers, but they made bank branches cheaper to operate, leading to more branches and more tellers (at least until 2010). Similarly, if AI makes software dramatically cheaper, the demand for software could explode because it was previously constrained by scarcity and cost.

    He emphasizes that the physical world will lag behind significantly. Robotics requires enormous capital, conviction, and time. Most self-driving startups from a decade ago failed. The interesting opportunities in the near term are at the interface between digital and physical: sensors feeding data to AI systems, actuators executing AI decisions in the real world, and new markets for information (he imagines prediction markets where agents pay for real-time photos from conflict zones).

    Education in the Age of Agents

    Karpathy’s MicroGPT project distills the entire LLM training process into 200 lines of Python. He started making an explanatory video but stopped, realizing the format is obsolete. If the code is already that simple, anyone can ask an agent to explain it in whatever way they need: different languages, different skill levels, infinite patience, multiple approaches. The teacher’s job is no longer to explain. It is to create the thing that is worth explaining, and then let agents handle the last mile of education.

    He envisions a future where education shifts from “guides and lectures for humans” to “skills and curricula for agents.” A skill is a set of instructions that tells an agent how to teach something, what progression to follow, what to emphasize. The human educator becomes a curriculum designer for AI tutors. Documentation shifts from HTML for humans to markdown for agents.

    His punchline: “The things that agents can do, they can probably do better than you, or very soon. The things that agents cannot do is your job now.” For MicroGPT, the 200-line distillation is his unique contribution. Everything else, the explanation, the teaching, the Q&A, is better handled by agents.

    Why Not Return to a Frontier Lab?

    The conversation closes with a nuanced discussion about why Karpathy remains independent. He identifies several tensions. First, financial alignment: employees at frontier labs have enormous financial incentives tied to the success of transformative (and potentially disruptive) technology. This creates a conflict of interest when it comes to honest public discourse. Second, social pressure: even without arm-twisting, there are things you cannot say and things the organization wants you to say. You cannot be a fully free agent. Third, impact: he believes his most impactful contributions may come from an “ecosystem level” role rather than being one of many researchers inside a lab.

    However, he acknowledges a real cost. Being outside frontier labs means his judgment will inevitably drift. These systems are opaque, and understanding how they actually work under the hood requires being inside. He floats the idea of periodic stints at frontier labs, going back and forth between inside and outside roles to maintain both independence and technical grounding.

    Thoughts

    This is one of the most honest and technically grounded conversations about the current state of AI I have heard in 2026. A few things stand out.

    The AutoResearch concept is genuinely important. Not because autonomous hyperparameter tuning is new, but because Karpathy is framing the entire problem correctly: the goal is not to build better tools for researchers. It is to remove researchers from the loop entirely. The fact that an overnight run found optimizations that a world-class researcher missed after years of manual tuning is a powerful data point. And the distributed computing vision (AutoResearch at Home) could be the most consequential idea in the entire conversation if someone builds it well.

    The “death of apps” framing deserves more attention. Karpathy’s Dobby example is not a toy demo. It is a preview of how every consumer software company’s business model gets disrupted. If agents can reverse-engineer APIs and unify disparate systems through natural language, the entire app ecosystem becomes a commodity layer beneath an intelligence layer. The companies that survive will be the ones that embrace API-first design and accept that their “user” is increasingly an LLM.

    The jaggedness observation is underappreciated. The fact that models can autonomously improve training code but cannot tell a new joke should be deeply uncomfortable for anyone claiming we are on a smooth path to AGI. It suggests that current scaling and RL approaches produce narrow excellence, not general intelligence. The joke example is funny, but the underlying point is serious: we are building systems with alien capability profiles that do not match any human intuition about what “smart” means.

    Finally, Karpathy’s decision to stay independent is itself an important signal. When one of the most capable AI researchers in the world says he feels “more aligned with humanity” outside of frontier labs, that should be taken seriously. His point about financial incentives and social pressure creating misalignment is not abstract. It is structural. And his proposed solution of rotating between inside and outside roles is pragmatic and worth considering for the entire field.

  • Jensen Huang on Nvidia’s Future: Physical AI, the Inference Explosion, Agentic Computing, and Why AI Doomers Are Wrong

    Jensen Huang sat down with the All-In Podcast crew at GTC 2026 for one of the most wide-ranging and candid conversations he’s had in years. From the Groq acquisition to $50 trillion physical AI markets, from defending Nvidia’s pricing to gently calling out Anthropic’s communications missteps, Huang covered everything. Here’s a complete breakdown of everything said — and what it means.


    ⚡ TL;DW

    • Nvidia has evolved from a GPU company into a full-stack AI factory company, and its TAM has expanded by 33–50% just from new rack configurations.
    • Inference demand is exploding — Huang says compute will scale 1 million times, and analysts who model 7–20% growth “don’t understand the scale and breadth of AI.”
    • The Groq acquisition positions Nvidia to run the right workload on the right chip — GPU, LPU, CPU, switch, all orchestrated under Dynamo, the AI factory OS.
    • Physical AI (robotics, autonomous vehicles, industrial automation) is Nvidia’s play at a $50 trillion market — and it’s already a ~$10 billion/year business growing exponentially.
    • OpenClaw (the open-source agentic framework built around Claude) is, in Jensen’s view, the new operating system for modern computing.
    • Jensen pushed back hard on AI doomerism — and diplomatically but clearly called out Anthropic’s communications as too extreme.
    • Robots are 3–5 years away from being “all over the place.” Jensen hopes for more than one robot per human on Earth.
    • Dario Amodei’s $1 trillion AI revenue forecast by 2030? Jensen says he’s being too conservative.
    • His advice to young people: become deeply expert at using AI. English majors may end up winning.

    🔑 Key Takeaways

    1. Nvidia Is No Longer a Chip Company

    Jensen Huang made clear that Nvidia’s identity has fundamentally shifted. The company is now an AI factory company — building not just GPUs but the entire computing stack: GPUs, CPUs, networking switches, storage processors (BlueField), and now LPUs via the Groq acquisition. The operating system tying it all together is called Dynamo, named after the Siemens dynamo that helped power the second industrial revolution by converting mechanical motion into electricity. Huang’s point: Dynamo is doing the same thing for AI — turning raw compute into intelligence at industrial scale.

    2. The Inference Explosion Is Real and Massive

    A year ago, Huang predicted inference would scale enormously. He’s now doubling down: from generative AI to reasoning models, compute requirements grew roughly 100x. From reasoning to agentic AI, another 100x. That’s 10,000x in two years — and Huang says we haven’t even started scaling yet. He believes the ultimate trajectory is 1 million times more compute than where we started. Analysts who project 20–30% revenue growth for Nvidia fundamentally don’t understand what’s coming.

    3. Disaggregated Inference Is the New Architecture

    The technical centerpiece of GTC 2026 was disaggregated inference — the idea that the AI processing pipeline is so complex (prefill, decode, working memory, long-term memory, tool use, multi-agent coordination) that it should run across heterogeneous chips, not just a single GPU rack. Nvidia’s Vera Rubin system is built for this: multiple rack types handling different workloads. Jensen says Nvidia’s TAM grew by 33–50% just from adding those four new rack types to what was previously a one-rack company.

    4. The $50 Billion Factory Produces the Cheapest Tokens

    Critics argue that Nvidia’s inference factories cost $40–50B versus competitors at $25–30B. Huang’s rebuttal is clean: don’t equate the price of the factory with the cost of the tokens. A $50B Nvidia factory producing 10x the throughput of a $30B alternative means Nvidia’s tokens are actually cheaper. When land, power, shell, storage, networking, and cooling are already fixed costs, the delta between GPU options is a small fraction of total spend — but the performance difference is enormous.
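
    The arithmetic is worth spelling out, using only the relative figures from the interview:

    ```python
    # Huang's point in numbers: factory price is not token cost.
    nvidia_capex, nvidia_throughput = 50e9, 10.0  # dollars, relative tokens/sec
    rival_capex,  rival_throughput  = 30e9,  1.0

    nvidia_cost = nvidia_capex / nvidia_throughput  # capex per unit of throughput
    rival_cost  = rival_capex  / rival_throughput

    print(rival_cost / nvidia_cost)  # 6.0: the pricier factory makes ~6x cheaper tokens
    ```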

    5. OpenClaw Is the New OS for Modern Computing

    Jensen spent serious time on OpenClaw, the open-source agentic framework built around Claude. His view: it’s not just a product announcement — it’s a computing paradigm shift. OpenClaw has a memory system (short-term scratch, long-term file system), skills/tools, resource management, scheduling, cron jobs, multi-agent spawning, and external I/O. These are the foundational elements of an operating system. His conclusion: for the first time, we have a personal AI computer — and it’s open source, running everywhere.

    6. Agents Mean Every Engineer Gets 100 Helpers

    Jensen’s internal benchmark at Nvidia: if a $500K/year engineer isn’t spending at least $250K worth of tokens annually, something is wrong. He compared it to a chip designer refusing to use CAD tools and working only in pencil. His vision: every engineer will have 100 agents working alongside them. The nature of programming shifts from writing code to writing ideas, architectures, specifications, and evaluation criteria — and then guiding agents toward outcomes.

    7. Physical AI Is a $50 Trillion Opportunity

    This is the biggest framing in the talk. Physical AI — robotics, autonomous vehicles, industrial automation, agriculture, healthcare instruments — represents the technology industry’s first real shot at a $50 trillion market that has been “largely void of technology until now.” Nvidia started this journey 10 years ago, it’s now inflecting, and it’s already approaching $10 billion/year as a standalone business. Huang expects this to grow exponentially.

    8. Robots Are 3–5 Years Away from Ubiquity

    Huang was asked about the “lost decade” of robotics — Google buying and selling Boston Dynamics, years of underwhelming progress. His take: America got into robotics too soon, got exhausted, and quit about five years before the enabling technology (AI “brains”) appeared. Now the brain is here. From a “high-functioning existence proof” (what we have now) to “reasonable products,” technology historically takes 2–3 cycles — meaning 3 to 5 years. He also flagged China’s formidable position in robotics hardware: motors, rare earth elements, magnets, micro-electronics. The world’s robotics industry will depend heavily on China’s supply chain.

    9. Jensen Thinks Dario Amodei Is Too Conservative

    Dario Amodei publicly predicted that AI model and agent companies will generate hundreds of billions in revenue by 2027–28 and reach $1 trillion by 2030. Jensen’s response: “I think he’s being very conservative. Way better than that.” His reasoning? Dario hasn’t fully accounted for the fact that every enterprise software company will become a reseller of AI tokens — a multiplicative expansion of go-to-market that will dwarf what any AI lab can sell directly.

    10. The AI Moat Is Deep Specialization

    When asked what the real competitive moat is at the application layer, Jensen said: deep specialization. General models will handle general intelligence. But every industry has domain expertise that needs to be captured in specialized sub-agents, trained on proprietary data. The entrepreneur who knows their vertical better than anyone else, connects their agent to customers first, and builds that flywheel — that’s the moat. He framed it as an inversion of traditional software: instead of building horizontal platforms and customizing at the edges, AI enables you to go vertical-first from day one.

    11. Jensen’s Gentle but Clear Critique of Anthropic’s Communications

    Asked what advice he’d give Anthropic following the Department of Defense controversy that created a PR crisis, Jensen praised Anthropic’s technology and their focus on safety — then offered a measured but pointed critique: warning people is good, scaring people is less good. He argued that AI leaders need to be more circumspect, more humble, more moderate. Making extreme, catastrophic predictions without evidence can damage public trust in a technology that is “too important.” His implicit warning: look what happened to nuclear energy. A 17% public approval rating for AI is the beginning of that same problem.

    12. China Policy: Back to Market, With Conditions

    Nvidia had a 95% market share in China — and lost it entirely due to export controls, falling to 0%. Jensen confirmed that Nvidia has received approved licenses from Secretary Lutnick to sell back into China, has received purchase orders from Chinese companies, and is actively ramping up its supply chain to ship. His broader point: the risk isn’t selling chips to China — the real risk is America becoming so afraid of AI that its own industries don’t adopt it while the rest of the world surges ahead.

    13. Taiwan, Supply Chain, and Geopolitical Risk

    Jensen laid out a three-part strategy for de-risking around Taiwan: (1) Re-industrialize the US as fast as possible — he said Arizona, Texas, and California manufacturing is accelerating with Taiwan’s help as a strategic partner. (2) Diversify the supply chain to South Korea, Japan, and Europe. (3) Demonstrate restraint — don’t press unnecessarily while building resilience. He also noted that Taiwan’s partnership has been genuine and deserves recognition and generosity in return.

    14. Data Centers in Space

    Not science fiction — Nvidia already has CUDA running in satellites doing AI image processing in orbit. The near-term thesis: it’s more efficient to process satellite imagery in space than beam raw data back to Earth. The longer-term architecture for space-based data centers is being explored, with radiation hardening already solved. The main challenge is cooling — in the vacuum of space, you can only use radiative cooling, which requires very large surface areas.
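
    The surface-area problem follows directly from the Stefan–Boltzmann law. A rough worked example; the 1 GW heat load and 300 K radiator temperature are my assumptions, not figures from the interview:

    ```python
    # Radiated power P = emissivity * sigma * A * T^4, so required area scales
    # with heat load. The load and temperature here are assumed for illustration.
    SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W / (m^2 K^4)
    emissivity = 0.9
    T_radiator = 300.0     # K
    heat_load  = 1e9       # W (a 1 GW data center)

    area_m2 = heat_load / (emissivity * SIGMA * T_radiator**4)
    print(f"{area_m2 / 1e6:.1f} km^2 of radiator")  # ~2.4 km^2 per gigawatt
    ```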

    15. Healthcare: Near the ChatGPT Moment for Digital Biology

    Jensen believes digital biology is approaching its own ChatGPT inflection point — the moment where representing genes, proteins, cells, and chemicals becomes as natural as language modeling. He flagged companies like OpenEvidence and Hippocratic AI as examples of where agentic healthcare is already working. His vision: every hospital instrument — CT scanners, ultrasound devices, surgical robots — will become agentic, with “OpenClaw in a safe version” running inside each one.

    16. Open Source and Closed Source Will Both Win

    Jensen pushed back on the idea that open source vs. proprietary is an either/or question. It’s both, necessarily. Proprietary models (OpenAI, Anthropic, Gemini) will continue to serve the general horizontal layer — and consumers love having options with distinct personalities. But industries need open models they can specialize, fine-tune, and control. The open model ecosystem, including Chinese models, is “near the frontier” and growing fast. His framework: connect to the best available model today via a router, and use that time to cost-reduce and fine-tune your specialized version.

    17. Advice for Young People: Master AI, Go Deep on Science

    Jensen’s advice for students deciding what to study: deep science, deep math, and strong language skills — because language is the programming language of AI. He made a striking claim: the English major might end up being the most successful professional in the AI era. His one non-negotiable: whatever you study, become deeply expert at using AI tools. And he used radiologists as proof that AI doesn’t destroy jobs — when AI did 100% of the computer vision work in radiology, demand for radiologists went up, not down, because the total number of scans possible exploded.


    📋 Detailed Summary

    The Groq Acquisition and Disaggregated Inference

    The conversation opened with the Groq acquisition — a deal Chamath jokingly said made him “insufferable” during the six-week close. Jensen explained the strategic logic: as Nvidia evolved from running large language models to running full agentic systems, the compute problem became radically more complex. Agentic workloads involve working memory, long-term memory, tool use, inter-agent communication, and diverse model types (autoregressive, diffusion, large, small). No single chip type handles all of this optimally.

    The solution is disaggregated inference — routing different parts of the processing pipeline to the most efficient hardware. Groq’s LPU chips are particularly suited to certain inference tasks. Nvidia’s Vera Rubin system now encompasses five rack types where it used to be one: GPU compute, networking processors, storage processors (BlueField), CPUs, and now LPUs. Jensen’s TAM math: the addition of those four rack types grew Nvidia’s addressable market in any given data center by 33–50% overnight.

    The operating system managing all of this is Dynamo, which Jensen introduced 2.5 years ago — a deliberate reference to the Siemens dynamo that helped power the second industrial revolution. Dynamo orchestrates workloads across this heterogeneous compute landscape, optimizing for cost, speed, and efficiency.

    Decision-Making at the World’s Most Valuable Company

    Asked how he allocates attention and makes strategic calls at a $350B+ revenue company, Jensen gave a surprisingly simple framework: pursue things that are insanely hard, that have never been done before, and that tap into Nvidia’s specific superpowers. If something is easy, competitors will flood in. If it’s hard and unique, the pain and suffering of building it becomes a moat in itself. He explicitly said he enjoys the pain — and that there’s no great invention that came easily on the first try.

    Physical AI and the Three Computers

    Jensen framed Nvidia’s physical AI strategy around three distinct computers:

    1. The Training Computer — for developing and creating AI models.
    2. The Simulation Computer (Omniverse) — for evaluating AI systems inside physics-accurate virtual environments (required for robotics and autonomous vehicles that can’t be tested purely in the real world).
    3. The Edge Computer — deployed in cars, robots, factory floors, teddy bears, and telecom base stations. Jensen flagged that the $2 trillion global telecom industry is being transformed into an extension of AI infrastructure — turning radio base stations into AI edge devices.

    Physical AI is, by Jensen’s estimate, the technology industry’s first real crack at the $50 trillion industrial economy. He started the investment 10 years ago. It’s now approaching $10 billion annually and growing exponentially.

    OpenClaw as the New Operating System

    Jensen’s analysis of OpenClaw (the open-source agentic framework built around Claude, referred to as “Claude Code” / “Open Claude” throughout the interview) was one of the most intellectually interesting sections of the conversation. He traced three cultural inflection points:

    1. ChatGPT — put generative AI into the popular consciousness by wrapping the technology in a usable interface.
    2. Reasoning models (o1, o3) — shifted AI from answering questions to answering them with grounded, verifiable reasoning, driving an inflection in OpenAI’s economics.
    3. OpenClaw — introduced the concept of agentic computing to the general population. But more importantly, it defined a new computing architecture: memory (short and long-term), skills, resource scheduling, IO, external communication, and agent spawning. These are the core elements of an operating system. OpenClaw is, in Jensen’s view, the blueprint for what a personal AI computer looks like — open source, running everywhere.
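
    One way to see the analogy is to map the pieces Jensen lists onto a skeleton. This is purely illustrative; the names below are not OpenClaw’s actual interfaces.

    ```python
    # The OS analogy as a skeleton: each field maps to a primitive Jensen names.
    # Illustrative only; none of this is OpenClaw's real API.
    from dataclasses import dataclass, field

    @dataclass
    class AgentComputer:
        scratch: dict = field(default_factory=dict)    # short-term memory
        memory_dir: str = "memory/"                    # long-term memory (file system)
        skills: dict = field(default_factory=dict)     # tools the agent can invoke
        schedule: list = field(default_factory=list)   # cron jobs / resource scheduling
        subagents: list = field(default_factory=list)  # multi-agent spawning

        def spawn(self, task: str) -> "AgentComputer":
            """Spawn a sub-agent that inherits the parent's skill set."""
            child = AgentComputer(scratch={"task": task}, skills=self.skills)
            self.subagents.append(child)
            return child
    ```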

    He also flagged that Nvidia contributed security governance work to OpenClaw alongside Peter Steinberger — ensuring agents with access to sensitive information, code execution, and external communication can be properly governed with appropriate policy constraints.

    The Agentic Future and Token Economics

    Jensen’s internal benchmark for token spending at Nvidia was striking: a $500K/year engineer who isn’t spending $250K/year in tokens is underperforming. He framed this as no different from a chip designer refusing to use CAD software. The implication for enterprise economics is profound: the cost basis of AI in a company isn’t an IT line item — it’s a multiplier on every knowledge worker’s output.
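
    For scale, here is what that benchmark implies at an assumed blended price of $10 per million tokens (my assumption, not a figure from the interview):

    ```python
    # What $250K/year of tokens buys at an assumed $10 per million tokens.
    annual_budget_usd   = 250_000
    usd_per_million_tok = 10.0   # assumed blended rate, not from the interview

    tokens_per_year    = annual_budget_usd / usd_per_million_tok * 1e6  # 2.5e10
    tokens_per_workday = tokens_per_year / 250                          # ~1e8

    print(f"{tokens_per_workday / 1e6:.0f}M tokens per working day")  # ~100M: agent scale, not chat scale
    ```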

    He also addressed Andrej Karpathy’s “autoresearch” concept — the idea of AI systems that autonomously run research experiments. A guest described completing, in 30 minutes on a desktop, a genomics analysis that would normally constitute a seven-year PhD thesis. Jensen’s response: this isn’t a fluke. It’s the beginning of a fundamental shift in what “doing science” means.

    His forecast on compute scaling: generative to reasoning = 100x. Reasoning to agentic = 100x. Total in two years = 10,000x. And the end state isn’t even close yet — he believes the long-run trajectory is 1 million times current compute levels.

    AI’s PR Crisis and Anthropic’s Comms Mistakes

    This segment was diplomatically delivered but substantively sharp. Jensen opened by genuinely praising Anthropic — their technology, their safety focus, their culture of excellence. Then he drew a distinction: warning people about AI capabilities is good and important. Scaring people with extreme, catastrophic predictions for which there’s no evidence is less good, and potentially very damaging.

    He pointed to the nuclear analogy: public fear of nuclear energy, driven partly by technology leaders’ own alarming statements, effectively killed the US nuclear industry. America now has zero new fission reactors while China builds a hundred. AI’s 17% public approval rating in the US is the beginning of the same dynamic. Jensen said the greatest national security risk from AI isn’t what other countries do with it — it’s the US being so afraid of it that American industries fail to adopt it while the rest of the world surges ahead.

    His prescription for AI leaders: be more circumspect, more humble, more moderate. Acknowledge that we can’t completely predict the future. Avoid statements that are extreme and unsupported by evidence. Our words matter in a way they didn’t used to — technology leaders are now central to the national security and economic policy conversation.

    China Policy: Return to Market

    One of the more concrete news items in the interview: Nvidia is returning to the Chinese market. Jensen confirmed they had a 95% market share in China — and fell to 0% due to export controls. They’ve now received approved licenses from Secretary Lutnick, Chinese companies have issued purchase orders, and Nvidia is ramping its supply chain to ship.

    His framework for the right AI export policy outcome: the American tech stack — from chips to computing systems to platforms — should be used by 90% of the world as the foundation on which other countries build their own AI. The alternative — an AI industry that ends up like solar panels, rare earth minerals, motors, and telecom infrastructure (all dominated by China) — is a national security catastrophe.

    Self-Driving and Competitive Positioning

    Jensen laid out Nvidia’s strategy in autonomous vehicles: they don’t want to build self-driving cars — they want to enable every car company to build them. Nvidia supplies all three computers: training, simulation, and the in-car edge computer. Their autonomous driving AI system, called Alpamayo, introduced reasoning capabilities into autonomous vehicles — decomposing complex scenarios into simpler ones the system knows how to navigate.

    On competition from customers (Google TPU, Amazon Inferentia, etc.): Jensen isn’t worried. His argument is that 40% of Nvidia’s business comes from customers who don’t just want chips — they need the full AI factory stack. CUDA isn’t just a chip instruction set; it’s a system. Companies that have tried to build their own silicon have found that chips without the full stack don’t solve the problem. Meanwhile, Nvidia is gaining market share, including pulling in Anthropic and Meta as Nvidia customers, and AWS just announced a million-chip order.

    Robotics: 3–5 Years to Everywhere

    Jensen’s robotics take was both bullish and grounded. America invented modern robotics, got in too early, got exhausted, and quit just before the AI brain appeared that would make it work. That brain is here now. From the current “existence proof” stage to “reasonable products,” he sees 3–5 years. His aspiration: more than one robot per human on Earth. The use cases he described range from factory floor automation to virtual presence (using your home robot as an avatar while traveling), to lunar and Martian factories run entirely by robots with materials beamed back to Earth at near-zero energy cost.

    China’s position in robotics is formidable and can’t be wished away: they lead in micro-electronics, motors, rare earth elements, and magnets — all foundational to building robot hardware. The world’s robotics industry, including the US, will depend heavily on China’s supply chain for hardware components even if American software and AI lead.

    Revenue Forecasts: Dario Is Too Conservative

    When the hosts described Dario Amodei’s forecast of hundreds of billions in AI model/agent revenue by 2027–28 and $1 trillion by 2030, Jensen said simply: “Way better than that.” His reason: Dario hasn’t fully factored in that every enterprise software company will become a value-added reseller of AI tokens — OpenAI’s, Anthropic’s, whoever’s. The go-to-market expansion that comes from every SAP, Salesforce, and ServiceNow reselling AI is multiplicative, not linear.

    Healthcare: Near the Inflection Point

    Jensen named three layers of Nvidia’s healthcare involvement: (1) AI biology/physics — using AI to represent and predict biological behavior for drug discovery; (2) AI agents — agentic systems for diagnosis assistance, first-visit intake, and clinical decision support (he named OpenEvidence and Hippocratic AI as leading examples); (3) Physical AI for healthcare — robotic surgery, AI-enabled instruments, and the vision of every hospital device (CT, ultrasound, surgical tools) becoming agentic. He sees digital biology as approaching its ChatGPT moment — the point where representing genes, proteins, and cells computationally becomes as natural and powerful as language modeling.

    Career Advice: Go Deep, Use AI

    Jensen closed with career guidance. His core advice: study deep science, deep math, and language — because language is now the programming language of AI. He made the counterintuitive claim that English majors may end up being the most successful professionals in the AI era because the ability to specify, guide, and evaluate AI outputs is an art form — and it’s not trivial. The person who knows how to give AI enough guidance without over-prescribing, who can recognize a great AI output from a mediocre one, and who can orchestrate teams of agents toward outcomes — that’s the most valuable skill.

    He used the radiologist story as his closing proof point: when computer vision was integrated into radiology, demand for radiologists went up, not down. The number of scans exploded, hospitals made more money, and more patients got diagnosed faster. AI didn’t replace radiologists — it made them bionic and made the whole system bigger. He expects the same pattern everywhere: every job will be transformed, some tasks will be eliminated, but the total pie grows dramatically.


    💭 Thoughts

    Jensen Huang is doing something rare among tech CEOs: he’s genuinely trying to build the mental model people need to understand what’s happening — not just sell products. The disaggregated inference argument, the three-computer framework, the OS analogy for OpenClaw, the token economics benchmark — these aren’t talking points. They’re conceptual tools for thinking clearly about a landscape most people are still squinting at.

    The most underappreciated part of the interview is the AI PR section. Jensen is essentially sounding an alarm without panicking: if America’s technology leaders keep scaring the public with AI doomerism, we will repeat the nuclear mistake. We’ll regulate ourselves into irrelevance while China builds the infrastructure we refused to build. The 17% approval number he cited should frighten every AI optimist in the room. Fear of a technology, once embedded culturally, is very hard to dislodge.

    The Anthropic critique was surgical. He didn’t name the specific controversy, didn’t pile on, and praised their technology extensively. But the message was clear: extreme safety warnings, even well-intentioned ones, carry real costs in the public square. That’s a genuinely hard tension for safety-focused AI companies, and there’s no clean answer — but Huang’s instinct that humility and circumspection serve better than catastrophism seems directionally correct.

    The physical AI thesis deserves more attention than it gets. Everyone is focused on the software intelligence race — OpenAI vs. Anthropic vs. Gemini. But Jensen is pointing at a $50 trillion industrial economy that AI has barely touched. Robotics, autonomous vehicles, agricultural automation, smart hospital instruments — this is where the real mass of economic value is locked. And Nvidia’s ten-year head start on the enabling infrastructure for physical AI may turn out to be more durable than any software moat.

    Finally: the robot optimism is infectious and probably correct. The world is genuinely short millions of workers. The enabling technology — AI brains good enough to drive perception, reasoning, and action in unstructured physical environments — just arrived. The hardware supply chain is largely intact. And the economic incentive to automate is stronger than it’s ever been. Three to five years feels aggressive. But so did “ChatGPT will change everything” in 2022.