PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

Tag: scaling laws

  • Jensen Huang at Stanford CS153 Frontier Systems on Co-Design, Agentic Computing, Vera Rubin, Open Models, and the Million-X Decade That Reshaped AI Infrastructure

    https://www.youtube.com/watch?v=tsQB0n0YV3k

    NVIDIA CEO Jensen Huang returned to Stanford for the CS153 Frontier Systems class (the room nicknamed itself “AI Coachella”) to lay out, in raw form, how he thinks about the computer being reinvented for the first time in over sixty years. Across roughly seventy minutes of student questions he walks through the codesign philosophy that gave NVIDIA a million-x decade, the architectural through-line from Hopper to Grace Blackwell to Vera Rubin to Feynman, the case for open source foundation models, the realities of tokens per watt and MFU, energy demand running a thousand times higher, the China and export-control debate, and his own biggest strategic mistakes. Watch the full conversation on YouTube.

    TLDW

    Huang argues every layer of computing has changed: the programming model, the system architecture, the deployment pattern, the economics. Co-design across CPUs, GPUs, networking, storage, switches and compilers gave NVIDIA roughly a million-x speed-up over ten years versus the ten-x Moore’s Law era, and that headroom is what let researchers say “just train on the whole internet.” Hopper was built for pre-training, Grace Blackwell NVLink72 for inference and reasoning (50x over Hopper in two years), Vera Rubin is built for agents that load long memory, call tools and need a low-latency single-threaded CPU bolted directly to the GPU, and Feynman extends that to swarms of agents that spawn sub-agents. Open weights matter because safety, sovereignty (230-plus languages no one else will fund) and domain models for biology, autonomy, robotics and climate need a foundation that NVIDIA is willing to seed. Compute is not really the scarce resource (Huang says place the order and the chips ship), the broken thing is institutional budgeting that can’t put a billion dollars into a shared university supercomputer. Energy demand is heading a thousand times higher and this is finally the moment market forces alone will fund sustainable generation. On geopolitics he rejects the GPUs-as-atomic-bombs framing and warns America will end up like its telecom industry if it cedes two thirds of the world. On career he advises seeking suffering on purpose. On strategy he says observe, reason from first principles, build a mental model, work backwards, minimize opportunity cost, maximize optionality.

    Key Takeaways

    • The computing model has been substantially unchanged since the IBM System 360, sixty-plus years ago. Huang’s first computer architecture book was the System 360 manual. AI is the first true reinvention.
    • Old computing was pre-recorded retrieval. New computing is generated, contextually aware and continuous. Cloud was on-demand. Agentic systems run continuously.
    • Codesign is NVIDIA’s central thesis. Inherited from the RISC era of Hennessy (at Stanford) and Patterson (at Berkeley), extended across CPUs, GPUs, networking, switches, storage, compilers and frameworks, all optimized together.
    • The result of full-stack codesign: roughly 1,000,000x faster compute over ten years, versus a generous 10x to 100x for Moore’s Law in the same period. Dennard scaling effectively ended a decade ago.
    • That million-x speed-up is what unlocked “train on all of the internet” as a realistic AI strategy.
    • After GPT, Huang says it was obvious thinking was next. Reasoning is just generating tokens consumed internally, then using tools is generating tokens consumed externally. Agentic systems followed predictably.
    • Education needs AI baked into the curriculum, not just taught as a subject. Pre-recorded textbooks cannot keep pace with knowledge being generated in real time.
    • Huang says he cannot learn anymore without AI. He has the AI read the paper, then read every related paper, then become a dedicated researcher he can interrogate.
    • Mead and Conway and the first-principles methodology of semiconductor design are still worth learning even though most of the scaling tricks have been exhausted.
    • NVIDIA itself is one of the largest consumers of Anthropic and OpenAI tokens in the world. One hundred percent of NVIDIA engineers are now agentically supported. Huang recommends Claude and similar tools by name and says open-source downloads will not match the integrated product harness.
    • NVIDIA still invests heavily in open foundation models because language and intelligence represent the codification of human knowledge. Five pillars: Nemotron (language), BioNeMo (biology), Alpamayo (autonomous vehicles), GR00T (humanoid robotics) and a climate science model (mesoscale multiphysics).
    • Sovereign language models matter. Roughly 230 world languages will never be a top priority for a commercial frontier lab. Nemotron is near-frontier and fully fine-tunable so any country can adapt it.
    • Safety and security require open weights. You cannot defend against or audit a black box. Transparent systems let researchers interrogate models and let defenders deploy swarms.
    • The future of cyber defense is not bigger-model-versus-bigger-model. It is trillions of cheap fast small models like Nemotron Nano surrounding the threat.
    • Domain models fuse language priors with world models. Alpamayo learned to drive safely on a few million miles instead of billions because it can reason like a human about the road.
    • MFU (Model FLOPs Utilization) is a misleading metric. Huang says he wants low MFU, because that means he over-provisioned every resource and never gets pinned by Amdahl’s law during a spike.
    • The xAI Memphis cluster running at 11 percent MFU is not necessarily a failure mode. In disaggregated prefill plus decode inference you can deliver very high tokens per watt with very low MFU.
    • The right metric is performance, ultimately tokens per watt as a proxy for intelligence per watt, and even that needs adjustment because not all tokens are equal. Coding tokens are worth more than other tokens.
    • Hopper was designed for pre-training. NVIDIA chose to build multi-billion-dollar systems when the largest existing scientific supercomputer cost $350 million, with no proven customer base. It worked.
    • Grace Blackwell NVLink72 was designed for inference, especially the high-memory-bandwidth decode phase. It is the world’s first rack-scale computer and delivered a 50x speed-up over Hopper in two years, against an expected 2x from Moore’s Law.
    • Vera Rubin is designed for agents. Long-term memory wired into storage and into the GPU fabric, working memory, heavy tool use, and Vera, a CPU optimized for low-latency single-threaded performance across many cores, so a multi-billion-dollar GPU system does not stall waiting on a slow tool call.
    • Feynman is being shaped for swarms of agents with sub-agents and sub-sub-agents, a recursive software topology that demands a new compute pattern.
    • Tokens per watt improved 50x in one generation. Compounding energy efficiency is the lever NVIDIA controls directly.
    • Total compute energy demand is heading roughly a thousand times higher than today, possibly two orders of magnitude beyond that. Huang says he would not be surprised if the estimate is low.
    • For the first time in history, market forces alone are enough to fund solar, nuclear and grid upgrades. Government subsidies are no longer required to make sustainable energy investment rational.
    • Copper interconnect is becoming a bottleneck. Photonics is moving from optional to structural inside racks and across them.
    • Comparing NVIDIA GPUs to atomic bombs, Huang says, is a stupid analogy. A billion people use NVIDIA GPUs. He advocates them to his family. He does not advocate atomic bombs to anyone.
    • If the United States cedes two thirds of the global market to competitors on policy grounds, the American technology industry will end up like American telecommunications, which was policied out of existence.
    • Huang directly rejects AI doom-by-singularity narratives. It is not true that we have no idea how these systems work. It is not true that the technology becomes infinitely powerful in a nanosecond. He calls the rhetoric irresponsible and harmful to the field students are about to enter.
    • On Stanford specifically: if the university president places an order, NVIDIA will deliver the chips. The bottleneck is that no university department has a billion-dollar compute budget because budgeting is fragmented across grants. Stanford’s $40 billion endowment is more than enough to fix that.
    • “It’s Stanford’s fault” is meant as empowerment. If something is your fault, you can solve it.
    • Career advice: do not optimize purely for passion. Most people do not yet know what they love. Pick the job in front of you and do it as well as possible. Even as CEO, Huang says, 90 percent of the work is hard and he suffers through it.
    • Suffering on purpose builds the muscle of resilience. When the company, the team or the family needs you to be tough, that muscle has to already exist.
    • NVIDIA’s first generation of products was technically wrong in nearly every dimension: curved surfaces instead of triangles, no Z-buffer, forward instead of inverse texture mapping, no floating point. The strategic recovery, not the technology, taught Huang the lessons that have lasted decades.
    • The biggest clean strategic mistake Huang names is the move into mobile chips (Tegra). It grew to a billion dollars then went to zero when Qualcomm’s modem dominance shut NVIDIA out of the 3G to 4G transition. The recovery into automotive and robotics (the Thor chip is the great great great grandson of that mobile lineage) was real, but Huang refuses to rationalize the original choice.
    • Forecasting framework: observe, reason from first principles, ask “so what” and “what next” until you have a mental model of the future, place your company inside that model, then work backwards while minimizing opportunity cost and maximizing optionality.
    • Best part of the CEO job: living at the intersection of vision, strategy and execution surrounded by people capable enough to make ambitious visions real. Worst part: the responsibility for everyone who joined the spaceship, especially in the near-death moments NVIDIA had four or five times early on.
    • Underrated insider note: Huang’s first apple pie with cheese, first hot fudge sundae and first milkshake all happened at Denny’s. His order: the Superbird, the fried chicken, and a custom Superbird-style ham and cheese with tomato and mustard.

    Detailed Summary

    Computing reinvented from the ground up

    Huang frames the moment as the first true rewrite of the computer in sixty-plus years. From the IBM System 360 forward, the mental model of writing code, running code, taking a computer to market and reasoning about applications stayed roughly constant. AI changes the programming model itself. Software is no longer a compiled binary running deterministically on a CPU. It is a neural network running on a GPU producing generated, contextual, real-time output. That cascades into how companies are organized, what tools developers use, what the network and storage stack look like, and what an application is even allowed to do. Robo-taxis, he notes, are an application no one would have attempted before deep learning unlocked perception.

    Codesign and the million-x decade

    Codesign is the philosophical center of the talk. Huang traces it to the RISC work of John Hennessy at Stanford, where simpler instruction sets won by being co-designed with the compiler rather than maximally optimized in isolation. NVIDIA extends the principle across every layer simultaneously: GPU architecture, CPU architecture, NVLink and NVSwitch fabrics, photonic interconnects, networking silicon, storage paths, CUDA libraries, frameworks and ultimately the model design. The numbers Huang gives are arresting. Moore’s Law in its prime delivered roughly 100x per decade. By the time Dennard scaling broke, real-world gains had compressed to roughly 10x. NVIDIA’s codesigned stack delivered between 100,000x and 1,000,000x over the same ten-year window. That non-linear speed-up is, in Huang’s telling, the precondition for modern AI: it is what allowed researchers to stop curating training sets and just feed the entire internet to the model.
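
    To make the compounding concrete, here is a quick back-of-the-envelope check of what those decade totals imply per year (the talk’s round numbers, not NVIDIA’s official figures):

    ```python
    # Implied annual compounding behind the decade-scale gains cited in the talk.
    # All totals are the talk's round numbers, not official NVIDIA data.

    decade = 10  # years

    for label, total_gain in [
        ("Moore's Law, late era (~10x per decade)", 10),
        ("Moore's Law, in its prime (~100x per decade)", 100),
        ("Co-designed full stack (~1,000,000x per decade)", 1_000_000),
    ]:
        annual = total_gain ** (1 / decade)
        print(f"{label}: {annual:.2f}x per year")

    # Moore's Law, late era (~10x per decade): 1.26x per year
    # Moore's Law, in its prime (~100x per decade): 1.58x per year
    # Co-designed full stack (~1,000,000x per decade): 3.98x per year
    ```

    The gap looks modest year to year, roughly 4x versus 1.6x, which is exactly why compounding it over a decade produces such a non-intuitive divergence.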

    Education has to fuse first principles with AI tools

    Asked how curriculum should evolve, Huang argues AI must be integrated into the learning process, not just taught about. He recalls Hennessy writing his textbook by hand a chapter a week while Huang was a student, and says pre-recorded textbooks cannot keep up with the rate at which AI generates new knowledge. He describes his own learning workflow: hand the paper to an AI, then have it read the entire surrounding literature, then treat the AI as a dedicated researcher who can be interrogated. At the same time he defends the classics. Mead and Conway are still the foundation. Most modern semiconductor scaling tricks have been exhausted, but knowing where the field came from sharpens judgment when designing what comes next.

    Open source and the five domain pillars

    Huang gives one of the most detailed public accounts of why NVIDIA invests so heavily in open foundation models even while being a top customer of closed labs. He recommends Claude and OpenAI by name for production coding work, and says 100 percent of NVIDIA engineers are now agentically supported. The open-weights case rests on three legs. First, language is the codification of intelligence, and there are at least 230 languages that no commercial lab will ever prioritize. Nemotron is built near frontier and released so any country or community can fine-tune it. Second, the same representation-learning approach has to be replicated in domains where the data is not internet text, so NVIDIA seeded BioNeMo for biology, Alpamayo for autonomy, GR00T for humanoid robotics and a climate model for mesoscale multiphysics. The economics of those fields would never produce a foundation model on their own. Third, safety and security require transparency. A black box cannot be defended or audited, and the future of cyber defense is not bigger-model-versus-bigger-model but swarms of cheap fast small models like Nemotron Nano surrounding the threat.

    MFU is the wrong metric, tokens per watt is closer

    A student raises the leaked memo that the xAI Memphis cluster is running at 11 percent Model FLOPs Utilization. Huang flips the framing. He says he would rather be at low MFU all the time, because that means he over-provisioned flops, memory bandwidth, memory capacity and network capacity. Bottlenecks shift constantly, so over-provisioning across every dimension is what lets the system absorb a spike without getting pinned by Amdahl’s law. In disaggregated inference, where prefill and decode are physically separated and decode is bandwidth-bound rather than flop-bound, NVLink72 can deliver extremely high tokens per watt while reporting very low MFU. Huang argues the right framing is performance, and ultimately tokens per watt as a rough proxy for intelligence per watt, adjusted for the fact that not all tokens are equal. A coding token is worth more than a generic token.
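
    A toy single-stream decode model makes the divergence between the two metrics visible. All numbers below are hypothetical, and the sketch ignores batching, which real serving stacks use to raise utilization; the point is only that a bandwidth-bound workload can report tiny MFU while tokens per watt stays healthy:

    ```python
    # Toy model (hypothetical numbers): bandwidth-bound decode can show
    # near-zero MFU while still delivering reasonable tokens per watt.

    params = 70e9          # hypothetical dense model, 70B parameters
    bytes_per_param = 1    # assume 8-bit weights
    peak_flops = 2e15      # hypothetical accelerator peak, FLOP/s
    mem_bw = 8e12          # hypothetical memory bandwidth, bytes/s
    power_w = 1_000        # hypothetical board power, watts

    # Single-stream decode streams all weights once per generated token.
    tokens_per_s = mem_bw / (params * bytes_per_param)   # ~114 tokens/s
    achieved_flops = tokens_per_s * 2 * params           # ~2 FLOPs per weight per token
    mfu = achieved_flops / peak_flops

    print(f"tokens/s:    {tokens_per_s:.0f}")
    print(f"MFU:         {mfu:.1%}")                     # ~0.8 percent
    print(f"tokens/watt: {tokens_per_s / power_w:.3f}")
    ```

    The multipliers sit idle not because the system is badly run but because decode is rate-limited by memory bandwidth, which is Huang’s point: judge the system on the tokens it delivers per watt, not on how busy the flops are.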

    Hopper, Grace Blackwell NVLink72, Vera Rubin, Feynman

    Huang gives the clearest public framing of NVIDIA’s roadmap as a sequence of architectural answers to evolving compute patterns. Hopper was built for pre-training, at a moment when NVIDIA chose to build multi-billion-dollar machines while the largest scientific supercomputer in the world cost $350 million and the marketplace for such systems was, on paper, zero. Grace Blackwell NVLink72 was the answer to inference and reasoning: a rack-scale computer that ganged 72 GPUs together because decode needs aggregate memory bandwidth far beyond a single chip. The generation-over-generation speed-up was 50x in two years, twenty-five times what Moore’s Law would have delivered. Vera Rubin is being built explicitly for agents. Agents load long-term memory from storage that has to be wired directly into the GPU fabric, they use working memory, they call tools that run on a CPU, and they wait. So the CPU has to be Vera, optimized for low-latency single-threaded code, because the multi-billion-dollar GPU system cannot afford to idle waiting on a slow tool call. Feynman extends the pattern to swarms of agents with sub-agents and sub-sub-agents, a recursive software topology that will demand its own compute pattern.

    Energy demand and the grid

    Huang’s energy projection is one of the most aggressive numbers in the talk. NVIDIA can compound tokens per watt by 50x per generation through codesign, but the total compute demand is heading roughly a thousand times higher, and Huang says he would not be surprised if the real figure is one or two orders of magnitude beyond that. The reason is structural: future computing is generative and continuous, not pre-recorded and on-demand. The good news, he argues, is that this is the best moment in the history of humanity to invest in sustainable generation. Market forces alone are now sufficient to fund solar, nuclear and grid upgrades. Government subsidies are no longer required to make the math work.
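
    The two numbers are worth multiplying together. Under the loose assumption that both apply over the same window (the talk does not pin down timeframes), they imply an enormous growth in tokens served:

    ```python
    # Loose arithmetic on the talk's two headline numbers. Assumes both
    # apply over the same window, which the talk does not specify.

    energy_growth = 1_000     # projected growth in total compute energy demand
    efficiency_growth = 50    # tokens-per-watt gain across a hardware generation

    # tokens = watts * (tokens per watt), so implied token growth is the product.
    implied_token_growth = energy_growth * efficiency_growth
    print(f"implied growth in tokens served: {implied_token_growth:,}x")  # 50,000x
    ```

    In other words, the thousand-x energy figure already has a fifty-x efficiency gain baked into it; the underlying demand for generated tokens is projected to grow far faster than the grid numbers alone suggest.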

    Adversarial countries, export controls and the telecom warning

    This is the segment where Huang is visibly fired up. He attacks the GPUs-as-atomic-bombs framing on its face. NVIDIA GPUs power medical imaging, video games and soy sauce delivery. A billion people use them. He advocates them to his family. The analogy collapses at the first comparison. He attacks the second framing, that American companies should not compete abroad because they will lose anyway, as a self-fulfilling defeat. Competition makes the company better. The third framing, that depriving the rest of the world of general-purpose computing benefits the United States, also fails on first principles: it benefits one or two American companies at the cost of an entire industry. The cautionary parallel is telecommunications. The United States once had a leading position in telecom fundamental technology and policied itself out of it. Huang’s worry, voiced explicitly to a room of CS students, is that they will graduate into a shell of a computer industry if the same path is repeated.

    AI doom and rational optimism

    In the same arc Huang rejects the science-fiction framing of AI as a singularity that arrives suddenly on a Wednesday at 7pm and ends civilization. He calls those claims irresponsible, says they are not true, and points out that the people advancing them are believed by audiences who then make policy on that basis. It is not true that no one understands how these systems work. It is not true that intelligence becomes infinitely powerful instantaneously. It is not true that there is no defense. His framing, which the host echoes as “rational optimism,” is that the goal is to create a future where people care about computers because the technology students are learning is worth mastering.

    Stanford’s compute problem is Stanford’s fault

    A student presses on the scarcity of compute for independent researchers, startups and universities inside the United States. Huang’s answer is sharp: there is no shortage. Place the order and the chips will arrive. The actual broken thing is institutional. University grants are fragmented across departments. No researcher can raise enough on a single grant to fund a billion-dollar shared cluster, and no one shares. He compares it to showing up at the grocery store demanding a billion dollars of tomatoes today. The solution is planning, aggregation and a campus-scale supercomputer, the way Stanford once built the linear accelerator. The endowment is $40 billion. Pulling a billion off it, contracting cloud capacity and giving every student and researcher AI supercomputer access is, in Huang’s view, obviously doable. When he says “it is Stanford’s fault” the host laughs, but Huang clarifies: if it is your fault you have the power to fix it.

    Career, suffering and resilience

    Asked how a CS student should spend the next few years, Huang pushes back on the standard “follow your passion” advice. Most people do not know what they love yet, because no one knows what they do not know. The bar of demanding joy from every working day is too high. Whatever the job is, do it as well as you can. Even as CEO of NVIDIA he says he genuinely loves about 10 percent of his work. The other 90 percent is hard and he suffers through it. He recommends suffering on purpose, because resilience is a muscle that only builds under load, and when the company, the team or the family needs that muscle, it has to already exist. Earlier in his life that meant cleaning toilets and busing tables at Denny’s. He does it today running a multi-trillion-dollar company.

    The biggest mistakes

    Huang separates technical mistakes from strategic mistakes. NVIDIA’s first generation of products was technically wrong in almost every way: curved surfaces instead of triangles, no Z-buffer, forward instead of inverse texture mapping, no floating point inside. The company wasted two and a half years. But the strategic genius of the recovery, the reading of the market, the conservation of resources and the reapplication of talent, is what taught him strategy. The clean strategic mistake he names is mobile. NVIDIA’s Tegra line grew to a billion dollars of revenue and then collapsed to zero when Qualcomm’s modem dominance locked NVIDIA out of the 3G to 4G transition. Huang explicitly refuses the comforting rationalization that the Tegra effort fed the Thor automotive chip (“Thor is the great great great grandson”). The original decision, he says, was a waste of time. The lesson is to think one or two clicks further about whether a market is structurally winnable before committing the company.

    Forecasting under fog of war

    The final substantive exchange is on forecasting. Huang’s method has four steps. Observe what is actually happening (AlexNet crushing two decades of computer vision research in one shot, GPT producing reasoning by token generation). Reason from first principles about why it works. Ask “so what” and “what next” recursively until a mental model of the future emerges. Place the company inside that future and work backwards. Crucially, expect to be partly wrong. Some outcomes will absolutely happen, some will likely happen, some might happen, and the strategy has to be robust across that distribution. The real cost of any strategic choice is the opportunity cost of the alternatives you did not take, so the discipline is to minimize that cost and maximize optionality while letting the journey itself pay for the journey.

    Thoughts

    The most useful thing in this conversation is the explicit architectural mapping of compute patterns to chip generations. Hopper for pre-training. Grace Blackwell NVLink72 for inference, because decode is bandwidth-bound and a single chip cannot supply it. Vera Rubin for agents, because tool calls stall multi-billion-dollar GPU systems and so the CPU has to be optimized for low-latency single-threaded code. Feynman for swarms. That sequence is not marketing. It is a falsifiable thesis about where the bottleneck moves next, and every other infrastructure company should be measuring themselves against it. If Huang is right that swarms of sub-agents are the next dominant pattern, then the design pressure shifts from raw flops to fabric topology, memory hierarchy and storage-to-GPU latency. That has implications for everyone downstream, including the hyperscalers building competing accelerators.

    The MFU section is the most intellectually generous moment in the talk. The instinct in the AI ops community has been to chase MFU as if it were a virtue. Huang argues, persuasively, that low MFU is consistent with high tokens per watt in a disaggregated inference setup, and that bottlenecks rotate fast enough that over-provisioning every resource is the rational design. That reframing matters because it changes what “scarce” means. Compute is not scarce in the way the discourse treats it. What is scarce is a coherent system designed end-to-end. The xAI 11 percent number, in that frame, is not embarrassing. It is the natural reading of a workload that is mostly decode.

    The Stanford segment is the part most likely to be quoted out of context. “It’s Stanford’s fault” is a deliberately provocative line, but the underlying claim is correct and load-bearing. Compute is not gated by NVIDIA refusing to ship chips. It is gated by the fact that fragmented grant funding cannot aggregate into the billion-dollar order that NVIDIA can fulfill. The implication is that universities and national labs need a structural change in how they pool capital for compute, and that the current model of every researcher buying a handful of cards is genuinely obsolete. Huang’s nudge about pulling a billion off the endowment is concrete enough to be acted on, and other major research universities should read this segment as a direct prompt.

    The geopolitical segment is the highest-stakes one. The telecommunications comparison is correct as a historical pattern, and Huang is one of the very few executives in a position to deliver that warning credibly. The unresolved tension is that the argument applies symmetrically. If American AI dominance is built by selling globally, that includes selling into adversarial states, and the policy question is where the line falls. Huang does not answer that question. He attacks the framing that lets the question be answered badly. That is a meaningful contribution to the discourse even if it does not resolve the underlying tradeoff.

    The career advice section is the part the social-media clips will mishandle. “Seek suffering” reads as macho when extracted. In context it is a specific operational claim about how resilience compounds, and it is paired with the Tegra story where Huang himself paid the price of not thinking one more click ahead. That kind of self-implication is rare in CEO talks, and it is the reason the talk is worth listening to in full rather than only reading the recap.

    Watch the full Stanford CS153 Frontier Systems conversation with Jensen Huang here.

  • Krishna Rao on Anthropic Going From 9 Billion to 30 Billion ARR in One Quarter and the Compute Strategy Powering Claude

    Krishna Rao, Chief Financial Officer of Anthropic, sat down with Patrick O’Shaughnessy on Invest Like the Best for one of the most detailed public looks yet at the operating engine behind Claude. He covers how Anthropic compounded from $9 billion of run rate revenue at the start of the year to north of $30 billion by the end of Q1, why he spends 30 to 40 percent of his time on compute, the playbook for buying gigawatts of AI infrastructure across Trainium, TPU, and GPU platforms, how Anthropic prices its models, why returns to frontier intelligence keep climbing, and what the Mythos release tells us about the cyber capabilities of the next generation of Claude.

    TLDW

    Anthropic is running the most compute fungible frontier lab in the world, with active deployments across AWS Trainium, Google TPU, and Nvidia GPU, and an internal orchestration layer that lets a chip serve inference in the morning and run reinforcement learning the same evening. Krishna Rao explains the cone of uncertainty that governs gigawatt scale compute procurement, the floor Anthropic refuses to drop below on model development compute, the Jevons paradox unlock from cutting Opus pricing, the 500 percent annualized net dollar retention from enterprise customers, the layer cake of long term deals with Google, Broadcom, Amazon, and the recent xAI Colossus tie up in Memphis, the phased release of the Mythos model in response to spiking cyber capabilities, the internal use of Claude Code to produce statutory financial statements and run a Monthly Financial Review skill, and why the team believes scaling laws are alive and well. The interview also covers fundraising history through Series D and Series E, the $75 billion already raised plus another $50 billion coming, talent density beating talent mass during the Meta poaching wave, and Rao’s belief that biotech and drug discovery represent the most exciting frontier for AI.

    Key Takeaways

    • Anthropic entered the year with about $9 billion of run rate revenue and ended the first quarter with north of $30 billion of run rate revenue, a more than 3x leap driven by model intelligence gains and the products built around them.
    • Compute is described as the lifeblood of the company, the canvas everything else is built on, and the most consequential class of decisions Rao makes. Buy too much and you go bankrupt. Buy too little and you cannot serve customers or stay at the frontier.
    • Rao spends 30 to 40 percent of his time on compute, even today, and the leadership team meets repeatedly on both procurement and ongoing compute allocation.
    • Anthropic is the only frontier language lab actively using all three major chip platforms in production: AWS Trainium, Google TPU, and Nvidia GPU. It is also the only major model available on all three clouds.
    • Flexibility is the central design principle. Anthropic builds flexibility into the deals themselves, into the orchestration layer that maps workloads to chips, and into compilers built from the chip level up.
    • The cone of uncertainty frames procurement. Small differences in weekly or monthly growth compound into wildly different two year outcomes, so the team plans across a range of scenarios rather than a single point estimate, and leans toward the upper end while protecting the downside.
    • Compute allocation across the company sits in three buckets: model development and research, internal employee acceleration, and external customer serving. A non negotiable floor protects model development even when customer demand is tight.
    • Anthropic estimates that if it cut off internal employee use of its own models, the freed compute could serve billions of dollars of additional revenue. It chooses not to, because internal use compounds into better future models.
    • Intelligence is multi dimensional, not a single IQ score. Anthropic measures real world capability through customer feedback, long horizon task performance, tool use, computer use, and speed at agentic tasks, not just leaderboard benchmarks that have largely saturated.
    • Each Opus generation, 4 to 4.5 to 4.6 to 4.7, delivers both capability improvements and an efficiency multiplier on token processing. New models often serve customers at a fraction of the prior cost while doing more.
    • Reinforcement learning is described as inference inside a sandbox with a reward function, so model efficiency gains directly improve internal RL throughput. The flywheel is tightly coupled.
    • Over 90 percent of code at Anthropic is now written by Claude Code, and a large share of Claude Code itself is written by Claude Code.
    • Anthropic shipped roughly 30 distinct product and feature releases in January and the pace has accelerated since.
    • Scaling laws, in Anthropic’s internal data, are alive and well. The team holds itself to a skeptical scientific standard and still does not see them slowing down.
    • Anthropic recently signed a 5 gigawatt deal with Google and Broadcom for TPUs starting in 2027, plus an Amazon Trainium agreement for up to 5 gigawatts, totaling more than $100 billion in commitments. A significant portion lands this year and next year.
    • A new partnership for capacity at the xAI Colossus facility in Memphis was announced just before the interview, aimed at expanding consumer and prosumer capacity.
    • Pricing has been remarkably stable across Haiku, Sonnet, and Opus. The biggest deliberate change was lowering Opus pricing, which produced a textbook Jevons paradox: consumption rose far faster than the price drop, and the new Opus 4.6 and 4.7 slot in at the same price point.
    • Mythos is the first model Anthropic chose to release in a phased way because of a sharp spike in cyber capability. In an open source codebase where a prior model found 22 security vulnerabilities, Mythos found roughly 250.
    • The Mythos release framework focuses on defensive use first, expands access over time, and is presented as a template for future capability spikes.
    • Anthropic now sells to 9 of the Fortune 10 and reports net dollar retention above 500 percent on an annualized basis. These are not pilots. Rao describes signing two double digit million dollar commitments during a 20 minute Uber ride to the studio.
    • The platform strategy is mostly horizontal. Anthropic will go vertical with offerings like Claude for Financial Services, Claude for Life Sciences, and Claude Security where it can demonstrate the model’s capabilities, but expects most application value to accrue to customers building on top.
    • Anthropic has raised over $75 billion in equity since Rao joined, with another $50 billion in commitments tied to the Amazon and Google deals. Capital intensity is real, but the raises fund the upper end of the cone of uncertainty more than they fund current losses.
    • The Series E close coincided with the day the DeepSeek news broke, forcing investors to reassess their AI thesis in real time. Anthropic closed the round anyway.
    • Inside finance, Claude now produces statutory financial statements for every Anthropic legal entity, with a human checker. A library of more than 70 finance specific skills underpins workflows.
    • A custom Monthly Financial Review skill produces a 90 to 95 percent ready monthly close report, so leadership discussion shifts from reconciling numbers to debating implications.
    • An internal real time analytics platform called Anthrop Stats compresses weekly insight cycles from hours to about 30 minutes.
    • The biggest token user inside Anthropic’s finance team is the head of tax, focused on tax policy engines and workflow automation. The most senior people, not the youngest, are leading internal adoption.
    • Talent density beats talent mass. When Meta and others ran aggressive offer waves, Anthropic lost two people while peer labs lost dozens.
    • All seven Anthropic co founders remain at the company, as do most of the first 20 to 30 employees, which Rao credits to a collaborative, transparent, debate friendly culture and a real culture interview that can veto otherwise top tier candidates.
    • Dario Amodei holds an open all hands every two weeks, writes a short prepared document, and takes unscripted questions from anyone at the company.
    • AI safety investments in interpretability and alignment have a commercial side effect. Looking inside the model helps Anthropic build better models, and enterprises selling sensitive workloads want to trust the lab they hand customer data to.
    • Anthropic explicitly identifies as America first in its approach to model development, and engages closely with the US administration on capability releases such as Mythos.
    • The longer term product vision is the virtual collaborator: an agent with organizational context, access to the company’s tools, persistent memory, and the ability to work on ideas, not just tasks, over long horizons.
    • CoWork, Anthropic’s extension of the Claude Code paradigm into general knowledge work, is being adopted faster than Claude Code itself when indexed to the same point in its launch curve.
    • Anthropic’s product teams ship daily, with a fleet of agents working across the company on specific tasks. Everyone effectively becomes a manager of agents.
    • The dominant downside risks to Anthropic’s high end forecast are slower customer diffusion of model capability into real workflows, scaling laws flattening unexpectedly, and Anthropic losing its position at the frontier.
    • Rao is most excited about biotech and healthcare outcomes, especially the prospect that AI could push drug discovery and lab throughput up 10x or 100x, turning currently incurable diagnoses into treatable ones within a patient’s lifetime.

    Detailed Summary

    Compute as Lifeblood and the Cone of Uncertainty

    Rao opens with the claim that compute is the most important resource at Anthropic, and the most consequential decision class in the company. You cannot buy a gigawatt of compute next week. You have to anticipate demand a year or two in advance, and the cost of being wrong in either direction is high. Buy too much and the unit economics collapse. Buy too little and you cannot serve customers or stay at the frontier, which are described as the same failure mode. To navigate this, the team uses a cone of uncertainty rather than point estimates. Small differences in weekly growth compound into vastly different two year outcomes, and Anthropic tries to position itself toward the upper end of that cone while preserving optionality. Rao notes he has had to consciously break a lifetime of linear thinking and force himself into exponential models.
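
    A few lines of arithmetic show why point estimates fail here. With illustrative weekly growth rates (not Anthropic’s actual numbers), small differences compound into wildly different two year outcomes:

    ```python
    # Illustrative compounding (hypothetical rates, not Anthropic's data):
    # small weekly growth differences diverge enormously over two years.

    weeks = 104  # two years

    for weekly_growth in (0.01, 0.02, 0.03):
        multiple = (1 + weekly_growth) ** weeks
        print(f"{weekly_growth:.0%} per week -> {multiple:.1f}x in two years")

    # 1% per week -> 2.8x in two years
    # 2% per week -> 7.8x in two years
    # 3% per week -> 21.6x in two years
    ```

    A forecaster who cannot distinguish 2 percent from 3 percent weekly growth is off by roughly 3x on a two year compute order, which is why the cone, not the midpoint, is the planning object.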

    Three Chip Platforms, One Orchestration Layer

    Anthropic uses Amazon’s Trainium, Google’s TPUs, and Nvidia’s GPUs fungibly. That was not free. Adopting TPUs at scale started around the third TPU generation, when outside observers thought it was a strange choice. Anthropic invested years into compilers and orchestration so workloads can flow across chips by generation and by job type. The team works deeply with Annapurna Labs at AWS to influence Trainium roadmaps because Anthropic stresses these chips harder than almost anyone. The result is what Rao believes is the most efficient utilization of compute across any frontier lab, with a dollar of compute going further inside Anthropic than anywhere else.

    Three Buckets and the Model Development Floor

    Compute gets allocated across model development, internal acceleration of employees, and customer serving. The conversations are collaborative rather than zero sum, but there is a hard floor on model development that the company refuses to cross even if it makes customer demand harder to serve in the short term. The thesis is simple. The returns to frontier intelligence are extremely high, especially in enterprise, so cutting model investment to chase near term revenue is a bad trade. Internal employee use is also explicitly protected. Rao notes that diverting that internal usage to external customers would unlock billions of additional revenue today, but the compounding benefit of accelerating researchers and engineers outweighs that.

    Intelligence Is Multi Dimensional

    Rao pushes back hard on the IQ framing of model progress. Benchmarks saturate quickly, and the real signal comes from how customers actually use the models. Anthropic looks at long horizon task completion, tool use, computer use, and time to result on agentic tasks. Two equally capable agents who differ only in speed produce dramatically different value, because the faster one compounds into more attempts and more outcomes. Frontier model leaps are also fuel efficient. The sedan to sports car analogy, where more performance costs more fuel, breaks down because each Opus generation, 4 to 4.5 to 4.6 to 4.7, delivers a step up in capability and a multiplier on per token efficiency.

    From 9 Billion to 30 Billion ARR in One Quarter

    The headline number for the quarter is a leap from about $9 billion of run rate revenue to over $30 billion, accomplished without a corresponding step up in compute, because new capacity lands on delivery ramps locked in 12 months earlier. Rao attributes the leap to model capability gains, products that surface that intelligence in usable form factors, and an enterprise customer base that pulls more workloads onto Claude as each generation unlocks new use cases. Coding started the wave with Sonnet 3.5 and 3.6, and the same pattern is now playing out elsewhere in the economy.

    Recursive Self Improvement and Talent Density

    Over 90 percent of Anthropic’s code is now written by Claude Code, including most of Claude Code itself. Rao describes this as a structural reason to keep allocating internal compute to employees even when external demand is hungry. Recursive self improvement is not happening through models that need no humans. It is happening through researchers who set direction and use frontier models to compress months of work into days. Talent density beats talent mass. When Meta and other labs went after Anthropic researchers with very large packages, Anthropic lost two people while peer labs lost dozens.

    Procurement Strategy and the Layer Cake

    Compute lands as a layer cake. Last month Anthropic signed a 5 gigawatt TPU deal with Google and Broadcom starting in 2027, alongside an Amazon Trainium agreement for up to 5 gigawatts. The total is north of $100 billion in commitments. A new tie up with xAI’s Colossus facility in Memphis was announced just before the interview, intended for nearer term capacity to support consumer and prosumer growth. Anthropic evaluates near term and long term compute deals against the same set of variables: price, duration, location, chip type, and how efficiently the team can run it. The relationships are deeper than procurement. The hyperscalers are also distribution channels for the model.

    Platform First, Selective Vertical Bets

    Rao describes Anthropic as a platform first business, with most expected value accruing to customers building on the platform. The team will only go vertical when it can either demonstrate capabilities that are skating to where the puck is going, like Claude Code did before the models could fully support it, or when it wants to set a template for an industry vertical, as with Claude for Financial Services, Claude for Life Sciences, and Claude Security. He acknowledges that surprise capability jumps make customers anxious about the platform competing with them, and frames Anthropic’s mitigation as deeper partnerships, early access programs, and an emphasis on accelerating customer building rather than disintermediating it.

    Pricing, Jevons Paradox, and Return on Compute

    Pricing across Haiku, Sonnet, and Opus has been stable. The notable exception is Opus, which Anthropic deliberately repriced lower when launching Opus 4.5 because Opus class problems were being squeezed into Sonnet workloads. Efficiency gains made it possible to serve Opus profitably at the new level. The consumption response was a classic Jevons paradox, with usage rising far more than the price reduction would have predicted, and Opus 4.6 then slotted in at the same price with a capability bump. Margins are not framed as a per token markup. Compute is fungible across model development, internal acceleration, and customer serving, so Anthropic measures return on the entire compute envelope rather than software style variable cost per call.
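
    The Jevons claim reduces to a simple elasticity check. With hypothetical numbers (the interview does not disclose actual price or volume figures), a price cut only grows revenue when consumption rises by more than the cut:

    ```python
    # Toy elasticity check (hypothetical numbers, not Anthropic's actuals):
    # a price cut grows revenue only if usage rises more than the cut.

    old_price, new_price = 1.0, 0.5      # per-token price, halved (hypothetical)

    for usage_multiple in (1.5, 2.0, 3.0):
        revenue_multiple = usage_multiple * new_price / old_price
        print(f"usage x{usage_multiple}: revenue x{revenue_multiple}")

    # usage x1.5: revenue x0.75  (the cut destroyed revenue)
    # usage x2.0: revenue x1.0   (break-even)
    # usage x3.0: revenue x1.5   (the Jevons case: usage outran the cut)
    ```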

    Fundraising, DeepSeek, and Capital Intensity

    Rao joined while Anthropic was closing its Series D, in the middle of a frontier model launch and during the FTX share liquidation. Investors initially questioned whether Anthropic needed a frontier model, whether AI safety and a real business could coexist, and why the sales team was so small. The Series E closed the same day the DeepSeek news broke, with markets violently repricing AI in real time. Since Rao joined, Anthropic has raised over $75 billion, with another $50 billion tied to the Amazon and Google compute deals. The reason for the size of the raises is the cone of uncertainty, not current losses. Returns on compute today are described as robust.

    Mythos, Cyber Capability, and Phased Releases

    The Mythos release marks the first time Anthropic shipped a model under a deliberately phased rollout because of a specific capability spike. Cyber is the dimension that spiked. Where a prior model found 22 vulnerabilities in an open source codebase, Mythos found roughly 250. The defensive applications, automatically patching massive codebases, are genuinely valuable, but the offensive risk is real enough that Anthropic chose to release to a smaller group first and expand access over time. Rao positions this as a template for future capability spikes, not a permanent restriction. He also describes the relationship with the US administration as cooperative, including the Department of War interaction, with Anthropic supporting a regulatory framework that does not strangle innovation but takes responsibility seriously.

    Claude Inside Finance

    Anthropic’s finance team is one of the strongest internal case studies. Statutory financial statements for every legal entity are produced by Claude, with a human reviewer. A skill library of more than 70 finance specific skills underpins a Monthly Financial Review skill that drafts the monthly close at 90 to 95 percent ready, so leadership meetings shift from explaining the numbers to discussing what to do about them. An internal analytics platform called Anthrop Stats compresses weekly insight cycles from hours to 30 minutes. The biggest internal token user in finance is the head of tax, building policy engines, which Rao highlights as evidence that adoption is driven by the most senior people, not just younger engineers.

    Culture, Co Founders, and the Race to the Top

    Seven co founders should not, on paper, work as a leadership group. Rao argues it works because the culture was set early around collaboration, intellectual honesty, transparency, and humility. The culture interview is a real veto, not a checkbox. Dario Amodei runs an all hands every two weeks with a short written piece followed by unscripted questions, and decisions, once made, get clean alignment rather than residual politics. Anthropic frames its approach as a race to the top, where being a model for how to build the technology responsibly is itself a recruiting and retention advantage.

    The Virtual Collaborator and the Frontier Ahead

    The product vision Rao describes is the virtual collaborator. Not just a smarter chatbot, but an agent with organizational context, access to the company’s tools, memory, and the ability to work on ideas over long horizons. Coding was the first domain to feel this, but CoWork, Anthropic’s extension of the Claude Code pattern into general knowledge work, is being adopted faster than Claude Code was at the same age. Product development inside Anthropic already looks different. Teams ship daily, with fleets of agents working across the company, and individual humans increasingly act as managers of those fleets.

    Downside Risks and What Excites Him Most

    The three risks Rao names if asked to do a premortem on a softer year are slower customer diffusion of model capability into real workflows, scaling laws unexpectedly flattening, and Anthropic losing its frontier position to competitors. None of these are observed today, but he is unwilling to rule any of them out. On the upside, he is most excited about biotech and healthcare. Lab throughput rising 10x or 100x, paired with AI assisted clinical workflows, could turn currently incurable diagnoses into treatable ones within a patient’s lifetime. That is the outcome he wants the technology to chase.

    Thoughts

    The most consequential structural point in this interview is the framing of compute as a single fungible resource pool measured by return on the entire envelope, not as a variable cost per inference call. That accounting shift, if you accept it, breaks most of the bear cases about AI lab unit economics. The bear argument almost always assumes that a token served to a customer is the only thing the chip did that day. Rao’s version is that the same fleet trains models in the morning, runs reinforcement learning at lunch, serves customers in the afternoon, and accelerates internal engineers in the evening. If even half of that is real, the right comparison is total compute spend versus total enterprise value created by the platform, and on that ratio Anthropic looks structurally strong rather than weak.
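
    A stripped-down contrast makes the accounting shift visible. All values below are invented placeholders; only the structure, one fleet amortized across several value streams, comes from the interview:

    ```python
    # Toy contrast (invented numbers): per-call margin accounting versus
    # return on the whole compute envelope when one fleet does many jobs.

    fleet_cost = 100.0          # one day of fleet cost, arbitrary units
    customer_revenue = 60.0     # external inference revenue that day

    # Bear-case accounting: the chips' only output was the tokens sold.
    print(f"per-call view: margin = {customer_revenue - fleet_cost:+.0f}")   # -40

    # Envelope accounting: the same fleet also trained, ran RL, and
    # accelerated internal engineers; assign each a (hypothetical) value.
    other_value = {
        "model development": 50.0,
        "internal RL": 20.0,
        "employee acceleration": 30.0,
    }
    total_value = customer_revenue + sum(other_value.values())
    print(f"envelope view: value/cost = {total_value / fleet_cost:.1f}x")    # 1.6x
    ```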

    The Jevons paradox on Opus pricing is the most actionable insight for anyone running an AI product. Most teams default to either chasing premium pricing on the newest model or undercutting to chase volume. Anthropic did something more disciplined: it left Sonnet and Haiku alone, dropped Opus when efficiency gains made it serveable, and watched aggregate usage rise faster than the price cut. The lesson is that frontier model pricing is not really a price problem. It is a capability access problem, and elasticity around the right tier is much higher than the standard SaaS playbook implies.

    The Mythos cyber jump deserves more attention than it has gotten. Going from 22 to 250 vulnerabilities found in the same codebase is the kind of capability discontinuity that genuinely changes the regulatory calculus. Anthropic is signaling that it can identify these discontinuities ahead of release and choose a deployment shape that respects them. Whether peer labs adopt similar discipline is the open question. Anthropic’s race to the top framing assumes they will be forced to. The competitive market may say otherwise.

    The hiring data point is the most underrated investor signal. Two departures while peer labs lost dozens, during the most aggressive talent war in tech history, is not a culture poster. It is a structural advantage that compounds every time another lab tries to buy its way to the frontier. Money can be matched. Conviction in the mission, transparent leadership, and a culture interview that can veto otherwise stellar candidates cannot. If you believe scaling laws hold, talent retention at this density is one of the few moats that actually scales with capital.

    Finally, the most interesting personal admission is that Krishna Rao, a finance leader trained at Blackstone and Cedar, is openly telling investors that linear thinking is the failure mode he had to break out of. The companies that pattern match this moment to prior technology waves are mispricing it, in both directions. The cone of uncertainty Anthropic uses internally is the right metaphor for everyone else too. If you are forecasting AI as if it is cloud in 2010, you are almost certainly wrong, and the magnitude of the error is much larger than it would be in any prior era.

    Watch the full conversation with Krishna Rao on Invest Like the Best here.

  • Elad Gil on the AI Frontier: Compute Constraints, the Personal IPO, and Why Most AI Founders Should Sell in the Next 12 to 18 Months

    Elad Gil sat down with Tim Ferriss for a wide ranging conversation that pairs almost perfectly with his recent Substack post Random thoughts while gazing at the misty AI Frontier. Together, the podcast and the post lay out the cleanest framework I have seen for what is actually happening in AI right now: a Korean memory bottleneck capping every lab, a class wide personal IPO across the research community, the fastest revenue ramps in capitalist history, and a brutal dot com style culling that most founders do not yet want to admit is coming. Below is a complete breakdown.

    TLDW (Too Long, Didn’t Watch)

    Elad Gil argues that AI is producing the fastest revenue ramps in capitalist history while setting up the same brutal power law that wiped out 99 percent of dot com companies. OpenAI and Anthropic each sit at roughly 0.1 percent of US GDP today, on a path to 1 percent of GDP run rate by end of 2026, which is insanely fast by any historical standard. The current ceiling on capabilities is not chips but Korean high bandwidth memory, and that constraint will likely hold all major labs roughly comparable in capability through 2028. Talent has just experienced a class wide personal IPO via Meta led bidding, with packages running tens to hundreds of millions per researcher. Most AI companies should consider exiting in the next 12 to 18 months while the tide is high. Right now consensus is correct. Save the contrarianism for later.

    Key Takeaways

    • OpenAI and Anthropic are each at roughly 0.1 percent of US GDP. With US GDP near 30 trillion dollars and each lab at a roughly 30 billion dollar revenue run rate, AI has gone from essentially zero to between 0.25 and 0.5 percent of GDP in just a few years. If the labs hit 100 billion in run rate by year end 2026 (which many expect), AI hits 1 percent of GDP run rate inside a single year.
    • The AI personal IPO is real. 50 to a few hundred AI researchers across multiple companies just experienced a class wide IPO event due to Meta led bidding, with top packages reportedly tens to hundreds of millions per person. The closest historical analog is early crypto holders around 2017.
    • The bottleneck is Korean memory, not Nvidia chips. High bandwidth memory from Hynix, Samsung, Micron, and others is the binding constraint. Expected to hold roughly two years. After that, power and data center buildout become the next walls.
    • No lab can pull dramatically ahead before 2028. Because every lab is compute constrained on the same input, OpenAI, Anthropic, Google, xAI, and Meta should remain roughly comparable in capability through that window, absent an algorithmic breakthrough that stays inside one lab.
    • Compute is the new currency. Token budgets now define what an engineer can accomplish, what a company can spend, and what business models are viable. Some companies (neoclouds, Cursor) are effectively inference providers disguised as tools.
    • The dot com base rate is the AI base rate. Around 1,500 to 2,000 companies went public in the late 1990s internet cycle. A dozen or two survived. AI will likely look the same.
    • Most AI founders should consider selling in the next 12 to 18 months. If you are not in the durable handful, this is your value maximizing window. A handful of companies (OpenAI, Anthropic) should never sell.
    • Buyers are bigger than ever. One percent of a 3 trillion dollar market cap is 30 billion dollars. That math makes massive AI acquisitions trivial for hyperscalers, vertical incumbents, and adjacent giants.
    • Underrated exit path: merger of equals. Two private AI competitors destroying each other on price should consider just merging. PayPal and X.com did exactly this in 2000.
    • 91 percent of global AI private market cap sits in a 10 by 10 mile square. If you want to do AI, move to the Bay Area. Remote work for cluster industries is BS.
    • Want money? Ask for advice. Want advice? Ask for money. The flip side works too: offering useful advice frequently leads to inbound investment opportunities.
    • AI is selling units of labor, not software. The shift is from selling seats and tools to selling cognitive output. This is why Harvey can win in legal, where decades of legal SaaS failed.
    • AI eats closed loops first. Tasks that can be turned into testable closed loop systems (code, AI research) get automated fastest. Map jobs on a 2×2 of closed loop tightness vs economic value to see where AI hits soonest.
    • Headcount will flatten at later stage companies. Multiple late stage CEOs told Elad they will not do big AI layoffs but will simply stop growing headcount even as revenue grows 30 to 100 percent. Hidden layoffs are also hitting outsourcing firms in India and the Philippines first.
    • The Slop Age could be the golden era of AI plus humanity. AI produces useful slop at volume, humans desloppify it, leverage is high, and the work is fun. This window may close as AI gets superhuman.
    • Market first, team second (90 percent of the time). Great teams die in bad markets. The exception is when you meet someone truly exceptional at the very earliest stage.
    • The one belief framework. If your investment memo needs three core beliefs to be true, it is too complicated. Coinbase was an index on crypto. Stripe was an index on e-commerce. That was the entire memo.
    • The four year vest is a relic. It exists because in the 1970s companies actually went public in four years. Today the private window has stretched to 20 years and venture has eaten what used to be public market growth investing.
    • Boards are in-laws. You cannot fire investor board members. Take a worse price for a better board member, because as Naval Ravikant said, valuation is temporary, control is forever.
    • Right now, consensus is correct. Save the contrarianism. The smart move is to just buy more AI exposure rather than try to outsmart the obvious.
    • Distribution wins more than founders admit. Google paid hundreds of millions to push the toolbar. Facebook bought ads on people’s own names in Europe. TikTok spent billions on user acquisition. Allbirds (yes, the shoe company) just raised a convert to build a GPU farm.
    • Anti-AI sentiment will get worse before it gets better. Maine banned new data centers. There has been violence directed at AI leaders. Expect more political and activist backlash, especially as AI is blamed for harms it has not yet caused while its benefits are mismeasured.
    • Use AI as a cold reader. Elad uploads photos of founders to AI models with cold reading prompts and reports surprisingly accurate personality assessments based on micro features.

    Detailed Summary

    The Numbers Are Insane and Mostly Underappreciated

    The most stunning data point in either source is the GDP math. US GDP is roughly 30 trillion dollars. OpenAI and Anthropic are each rumored to be at roughly 30 billion dollars in revenue run rate, putting each one at 0.1 percent of US GDP. Add cloud AI revenue and the picture gets stranger: AI has grown from essentially zero to between 0.25 and 0.5 percent of GDP in only a few years. If the labs hit 100 billion in run rate by year end 2026, AI will be at roughly 1 percent of GDP run rate inside a single year. There is no historical analog for that pace. Elad notes that productivity gains from AI may end up mismeasured the way internet productivity was undercounted in the 2000s, which would have downstream consequences for regulation: AI gets blamed for the bad (job losses) and credited for none of the good (new jobs, education gains, healthcare improvements). His half joking aside is that the real ASI test may be the ability to actually measure AI’s economic impact.
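
    A quick sanity check on that arithmetic (a minimal sketch; the GDP and run-rate figures are the rough numbers cited above, not reported financials):

    ```python
    # Back-of-the-envelope GDP share math using the rough figures from the paragraph above.
    us_gdp = 30e12           # ~$30T US GDP
    lab_run_rate = 30e9      # ~$30B rumored run rate per lab

    print(f"per-lab share of GDP: {lab_run_rate / us_gdp:.2%}")   # 0.10%

    # Labs plus cloud AI revenue in the $75B-$150B range matches the 0.25-0.5% estimate:
    for total_ai_revenue in (75e9, 150e9):
        print(f"${total_ai_revenue / 1e9:.0f}B total -> {total_ai_revenue / us_gdp:.2%} of GDP")
    ```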

    The AI Personal IPO

    The most underdiscussed phenomenon in AI right now, according to Elad, is what he calls a class wide personal IPO. When a company IPOs, a subset of employees become wealthy, lose focus, and either start companies, get into politics, fund passion projects, or check out. Meta started aggressively bidding for AI talent. Other major labs had to match. The result was 50 to a few hundred researchers, scattered across multiple labs, suddenly receiving compensation in the tens to hundreds of millions of dollars range. The only historical analog Elad can think of is early crypto holders around 2017. Some chunk of these newly wealthy researchers will redirect attention to AI for science, side projects, or quiet quitting. The aggregate field stays mission aligned, but the distribution of attention has shifted.

    The Korean Memory Bottleneck

    Every major AI lab today is building giant Nvidia clusters paired with high bandwidth memory, primarily from Korean fabs and a few other suppliers. They run massive amounts of data through these clusters for months, and the output is, almost absurdly, a single flat file containing what amounts to a compressed version of human knowledge plus reasoning. Right now, the binding constraint on this whole stack is HBM from Hynix, Samsung, Micron, and others; Korean memory fab capacity has lagged every other piece of the system. Elad estimates this constraint persists for roughly two years. After that, the next walls are likely data center construction and power. The strategic implication is enormous. While memory constrains everyone, no single lab can buy 10x the compute of its rivals, so capabilities should stay roughly comparable across the major labs. Once that constraint lifts, possibly around 2028, one player could theoretically pull dramatically ahead, especially if AI assisted AI research closes a self improvement loop inside one lab.

    Compute Is the New Currency

    The blog post sharpens a framing that runs throughout the podcast: compute, denominated in tokens, is now a unit of economic value. Token budgets define what an engineer can accomplish, what a company can spend, and what business models work. Some companies are effectively inference providers wearing tool costumes. Neoclouds are the cleanest example. Cursor is another, subsidizing inference as a user acquisition strategy. The most absurd recent example: Allbirds, the shoe company, raised a convertible to build a GPU farm. Whether this becomes the AI version of Microstrategy’s Bitcoin trade or a cautionary tale, it tells you where the cost of capital believes the next decade is going.
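
    To make "priced in tokens" concrete, here is a minimal sketch; the per-million-token rates and usage figures are hypothetical placeholders, not any provider's actual pricing:

    ```python
    # Illustrative only: converts an engineer's monthly token budget into dollars.
    # The prices below are hypothetical placeholders, not real provider rates.
    PRICE_PER_M_INPUT = 3.00    # $ per million input tokens (assumed)
    PRICE_PER_M_OUTPUT = 15.00  # $ per million output tokens (assumed)

    def monthly_token_cost(input_tokens: float, output_tokens: float) -> float:
        """Dollar cost of a monthly token budget at the assumed rates."""
        return (input_tokens / 1e6) * PRICE_PER_M_INPUT + (output_tokens / 1e6) * PRICE_PER_M_OUTPUT

    # An agent-heavy engineer burning 2B input / 200M output tokens a month:
    print(f"${monthly_token_cost(2e9, 2e8):,.0f}/month")  # $9,000 at these rates
    ```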

    The Dot Com Survival Math

    Elad walks through the brutal arithmetic that AI founders should be internalizing. In the late 1990s and early 2000s, somewhere between 1,500 and 2,000 internet companies went public. Of those, roughly a dozen or two survived in any meaningful form. Every cycle has looked like this: automotive in the early 1900s, SaaS, mobile, crypto. There is no reason AI will be different. Most current AI companies, including those ramping revenue today, will see the market, competition, and adoption turn on them. The question every AI founder should be asking is whether they are in the durable handful or not.

    Most AI Companies Should Consider Exiting in the Next 12 to 18 Months

    This is the most actionable and most uncomfortable take in either source. While the tide is rising, every AI company looks unstoppable. Whether they actually are, in a 10 year frame, is a separate question. Founders running successful AI companies should take a cold honest look at whether the next 12 to 18 months is their value maximizing window. Companies typically have a 6 to 12 month peak before some headwind hits, often visible in the second derivative of growth. The best signal that you should sell is when growth rate is starting to plateau and you can see why. A handful of companies (OpenAI, Anthropic, the durable winners) should never exit. Many others should, while everything is still on the upswing.
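
    The "second derivative of growth" signal can be made mechanical; a minimal sketch, with invented monthly revenue figures:

    ```python
    # Flag when revenue growth is decelerating (negative second derivative).
    # Revenue numbers are made up for illustration.
    revenue = [10, 14, 19, 25, 31, 36, 40, 43]   # monthly revenue, $M

    growth = [b / a - 1 for a, b in zip(revenue, revenue[1:])]   # first derivative: MoM growth rate
    accel = [b - a for a, b in zip(growth, growth[1:])]          # second derivative: change in growth rate

    for month, a in enumerate(accel, start=2):
        if a < 0:
            print(f"month {month}: growth decelerating ({a:+.1%} vs prior month)")
    ```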

    What Makes an AI Company Durable

    Elad lays out four lenses for evaluating durability at the application layer:

    1. Does your product get dramatically better when the underlying model gets better, in a way that keeps customers loyal?
    2. How deep and broad is the product? Are you building multiple integrated products embedded in actual workflows?
    3. Are you embedded in real change management at the customer? AI adoption is mostly a workflow change problem, not a tech problem. Workflow embedding is durable.
    4. Are you capturing and using proprietary data in a way that creates a system of record? Data moats are often overstated, but sometimes real.

    At the lab layer, Elad believes OpenAI, Anthropic, and Google are durable absent disaster. He predicted three years ago that the foundation model market would settle into an oligopoly aligned with cloud, and that prediction has roughly held.

    Selling Work, Not Software

    The deepest structural insight in the conversation is that generative AI is shifting what software companies sell. The old model was selling seats, tools, and SaaS subscriptions. The new model is selling units of cognitive labor. Zendesk sold seats to support reps. Decagon and Sierra sell agentic support output. Harvey can win in legal even though selling to law firms was historically considered terrible business, because Harvey is not selling tools, it is augmenting lawyer output. This shift opens markets that were previously closed and dramatically grows tech TAMs. It is also why founder-limited theories of entrepreneurship currently understate how many opportunities exist.

    AI Eats Closed Loops First

    One of the cleanest mental models in the blog post is the closed loop framework. AI automates first what can be turned into a testable closed loop. Code is the canonical example: outputs can be tested, errors detected, models can iterate. AI research is similar. Both have tight feedback loops and high economic value, which puts them at the top of the AI impact ranking. Map jobs on a 2×2 of closed loop tightness vs economic value and you can see where AI hits soonest. The interesting forward question is which jobs become more closed loop next. Data collection and labeling will keep growing in every field as a result.
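
    A toy version of that 2×2 makes the ranking mechanical; all scores below are invented for illustration:

    ```python
    # Toy scoring of the closed-loop 2x2. Scores (0-1) are invented; the point is
    # the ordering logic: tight feedback loops plus high value get hit first.
    jobs = {
        "software engineering": (0.9, 0.9),   # (loop tightness, economic value)
        "AI research":          (0.8, 0.9),
        "customer support":     (0.7, 0.5),
        "graphic design":       (0.4, 0.4),
        "field plumbing":       (0.1, 0.5),
    }

    ranked = sorted(jobs.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    for job, (tightness, value) in ranked:
        print(f"{job:22s} tightness={tightness:.1f} value={value:.1f} exposure={tightness * value:.2f}")
    ```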

    The Harness Matters More Than People Think

    For coding tools and increasingly for enterprise applications, what Elad calls the harness, the wrapper of UX, prompting, workflow integration, and brand around the underlying model, is becoming sticky. It is not just which model you call. It is the environment built around it. Cursor and Windsurf demonstrate this in coding. The interesting open questions are what the harness looks like for sales AI, for AI architects, for analyst workflows. Those gaps leave room for startups even as model capabilities converge.

    Hidden Layoffs and the Developing World

    Most announced AI driven layoffs are probably just COVID era overhiring corrections wrapped in a more flattering narrative. But real AI driven labor displacement is happening, and it is hitting outsourcing firms first. That means countries like India and the Philippines, where many outsourced services jobs sit, are likely to be the most impacted earliest. Several developing economies built their growth ladders on services exports. If AI takes those jobs first, the migration and economic patterns of the next decade may shift in ways nobody is yet planning for.

    The Flat Company

    Multiple late stage CEOs told Elad they will not announce big AI layoffs. Instead, they will simply stop growing headcount. If revenue grows 30 to 100 percent, headcount stays flat or shrinks via attrition. Existing employees become dramatically more productive. The very best people who can leverage AI will see compensation inflate. Sales and some growth engineering keep hiring. Almost everything else flatlines. This is mostly a later stage and public company phenomenon. True early stage startups should still scale aggressively after product market fit, just with more leverage per person.

    Exit Options for AI Founders

    Elad lays out four exit categories. First, the labs and hyperscalers themselves: Apple, Amazon, Google, Microsoft, Meta. Second, vertical incumbents like Thomson Reuters for legal or healthcare giants for clinical AI. Third, the underrated category of merger of equals between two private AI competitors who are currently destroying each other on price. PayPal and X.com did this in 2000. Uber and Lyft reportedly almost did. Fourth, large adjacent tech companies: Oracle, Samsung, Tesla, SpaceX, Snowflake, Databricks, Stripe, Coinbase. The market cap math has changed in a way that makes acquisition trivial. One percent of a three trillion dollar market cap is 30 billion dollars, which means a hyperscaler can do massive acquisitions almost casually.

    Geographic Concentration Is Extreme

    Elad’s team analyzed where private market cap aggregates. Historically half of global tech private market cap sat in the US, with half of that in the Bay Area. With AI, 91 percent of global AI private market cap is in a single 10 by 10 mile square in the Bay Area. New York is a distant second and then it falls off a cliff. For defense tech, the cluster is Southern California (SpaceX, Anduril, El Segundo, Irvine). Fintech and crypto skew toward New York. The remote everywhere advice is, Elad says, just BS for anyone trying to break into an industry cluster.

    How Elad Got Into His Best Deals

    Stripe started with Elad cold emailing Patrick Collison after selling an API company to Twitter. A couple of walks later, Patrick texted that he was raising and Elad was in. Airbnb came from helping the founders raise their Series A and being asked at the end if he wanted to invest. Anduril came from noticing that Google had pulled out of Project Maven and asking if anyone was building defense tech, then meeting Trae Stephens at a Founders Fund lunch. Perplexity came from Aravind Srinivas cold messaging him on LinkedIn while still at OpenAI. Across all of these, the pattern is the same: be in the cluster, be helpful, be talking publicly about technology nobody else is talking about, and be useful to founders before any money is on the table.

    The One Belief Framework

    Investors love complicated 50 page memos. Elad believes the actual decision usually collapses into a single core belief. Coinbase: this is an index on crypto, and crypto will keep growing. Stripe: this is an index on e-commerce, and e-commerce will keep growing. Anduril: AI plus drones plus a cost plus model will be important for defense. If your thesis needs three things to be true, it is probably not going to work. If it needs nothing, you have no thesis.

    Boards as In-Laws

    Elad emphasizes that founders should treat board composition like one of the most important hiring decisions of the company. You cannot fire an investor board member. They have contractual rights. So if you are going to be stuck with someone for a decade, take a worse valuation for a better human. Reid Hoffman’s framing is that the best board member is a co-founder you could not have otherwise hired. Naval Ravikant’s framing is that valuation is temporary but control is forever. Elad recommends writing a job spec for every board seat.

    The Slop Age as a Golden Era

    One of the warmest takes in the blog post is the framing of the current moment as the Slop Age, and the suggestion that this might actually be the golden era of AI plus humanity. Before the last few years, AI was inaccessible and narrow. Eventually AI may become superhuman at most tasks. Today, AI produces useful slop at volume, which means humans are still needed to desloppify the slop, but the leverage on time and ambition is real. That makes the work fun. If AI displaces people or starts doing more interesting work, this golden moment fades. Elad also notes the obvious counter, that the era of human generated internet slop preceded the AI slop era. AGI may end the slop age, or alternately may be the thing that finally cleans up all the prior waves of human slop.

    Anti-AI Regulation and Violence Will Increase

    This is one of the more sobering threads in the blog post. Real world AI driven labor displacement has been small so far, but anti-AI sentiment is already strong and growing. Maine just banned new data centers. There has been actual violence directed at AI leaders, including a recent attack on Sam Altman. Elad’s view is that AI leaders should work harder on optimistic public framing, real political lobbying, and reining in the doom narrative coming from inside the field. Otherwise the regulatory and activist backlash will get much worse, and likely on the basis of mismeasured impacts.

    Right Now Consensus Is Correct

    The headline contrarian take from the episode is that contrarianism right now is wrong. There are moments in time when betting against the crowd pays. This is not one of them. The smart bet is just buying more AI exposure. Trying to find the clever angle, the underlooked hardware play, the secret macro thesis, is overthinking it. Save the contrarian moves for later in the cycle.

    Distribution Almost Always Matters

    Elad pushes back on the founder mythology that great products win on their own. Google paid hundreds of millions of dollars in the early 2000s to distribute its toolbar through every popular app installer on the internet. Facebook bought search ads against people’s own names in European markets to seed network liquidity. TikTok spent billions on user acquisition before its algorithm could lock people in. Snowflake spent enormous sums on enterprise sales and channel partnerships. Sometimes the best product wins. Often the company with the best distribution wins. Founders should plan for both.

    AI as a Cold Reader and a Research Partner

    Two of the more practical AI workflows Elad describes: First, uploading photos of founders to AI models with cold reading prompts that ask the model to identify micro features (crow's feet from genuine smiling, brow patterns, posture cues) and infer personality traits, sense of humor, and likely social behavior. He reports the outputs are surprisingly specific. Second, running deep dives across multiple models in parallel (Claude, ChatGPT, Gemini), asking each for primary sources, summary tables, and cross checked data. He recently used this approach to investigate the rise in autism and ADHD diagnoses, concluding that diagnostic criteria shifts and school incentives drive most of it, and noting that maternal age has a stronger statistical association with autism than paternal age, despite paternal age getting all the public discourse.
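
    The multi-model fan-out can be sketched as below; `query_model` is a hypothetical stand-in to be wired to whichever provider SDKs you actually use, and the model names are placeholders, not exact model IDs:

    ```python
    # Sketch of the parallel deep-dive workflow described above. `query_model` is a
    # hypothetical placeholder, not a real SDK call; substitute your provider clients.
    from concurrent.futures import ThreadPoolExecutor

    PROMPT = (
        "Investigate the rise in autism and ADHD diagnoses. "
        "Cite primary sources, include a summary table, and flag uncertain claims."
    )

    def query_model(model_name: str, prompt: str) -> str:
        # Placeholder: replace with a real API call for each provider.
        return f"[{model_name}] stubbed response to: {prompt[:40]}..."

    models = ["claude", "chatgpt", "gemini"]   # placeholder names

    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {name: pool.submit(query_model, name, PROMPT) for name in models}
        answers = {name: f.result() for name, f in futures.items()}

    # Cross-check: compare each model's citations and tables against the others.
    for name, answer in answers.items():
        print(name, "->", answer)
    ```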

    The First Ever 10 Year Plan

    For someone who has been compounding aggressively for two decades, Elad has somehow never written a 10 year plan until now. He knows it will not play out as written. The point is that the act of imagining a decade out shifts what you choose to do in the near term. He explicitly rejects the AGI in two years therefore plans are pointless framing as defeatist. There will be interesting things to do regardless of how the AGI timeline plays out.

    Thoughts

    This is one of the more useful AI investor conversations of 2026, mostly because Elad is willing to put numbers and timelines on things that are usually left vague. Pairing the podcast with the underlying Substack post is the right move because the post is where the GDP math, the closed loop framework, and the Slop Age framing actually live. The podcast is where Elad explains how he thinks rather than just what he thinks.

    The 12 to 18 month sell window framing is the most actionable single idea in either source, and probably the most uncomfortable for AI founders sitting on multi billion dollar paper valuations. The math is unforgiving. A dozen winners out of thousands. If you are honest with yourself about whether you are in the dozen, you know what to do.

    The Korean memory bottleneck framing explains a lot of current behavior. The talent wars make more sense once you accept that compute is not going to be the differentiator for two years, so people become the only remaining lever. The convergence of capabilities across OpenAI, Anthropic, Google, and xAI starts to look less like coincidence and more like the structural inevitability of a supply constrained input. The 2028 inflection date is the one to watch.

    Compute as currency is the cleanest reframing in the blog post. Once you start pricing companies in tokens rather than dollars, everything from Cursor’s economics to Allbirds raising a convert to build a GPU farm becomes legible. The interesting question is whether this is a permanent unit of denomination or a transitional one that fades when inference costs collapse.

    The software to labor argument is the structural framing that I think will hold up the longest. Once you internalize that we are not selling seats anymore but selling cognitive output, every vertical that was previously locked behind ugly procurement and IT inertia opens up. Harvey is the proof of concept. There will be 30 more Harveys across every white collar profession.

    The closed loop framework is the cleanest predictor of which jobs get hit hardest and soonest. If you want to know whether your role is exposed, the questions to ask are whether outputs can be machine evaluated, how tight the feedback loop is, and how high the economic value is. The intersection is where AI lands first.

    The geographic concentration data is genuinely shocking. 91 percent of global AI private market cap in a 10 by 10 mile area is the kind of statistic that should make everyone outside that square think very carefully about what game they are playing.

    The Slop Age framing is the most emotionally honest moment in the post. We are in a window where humans still meaningfully add value on top of AI output. That window is finite. Enjoy it.

    The anti-AI backlash thread is the one I think most people in the industry are still underweighting. Maine banning new data centers is a leading indicator, not a one off. The fact that the impacts are likely to be mismeasured by official statistics makes the political dynamics worse, not better. AI will get blamed for harms it did not cause and credited for none of the gains. If the field’s leaders do not start communicating better and lobbying smarter, the regulatory environment in 2028 will be much worse than in 2026.

    Finally, Elad’s first ever 10 year plan stands out as the most quietly important moment in the episode. The implicit message is that even people who have been compounding aggressively for two decades benefit from forcing a longer time horizon onto their thinking. Most plans fail. The act of planning still changes what you do today.

    Read the original Elad Gil post here: Random thoughts while gazing at the misty AI Frontier. Find Elad on X at @eladgil, on his Substack at blog.eladgil.com, and on his website at eladgil.com. Tim Ferriss publishes the full episode at tim.blog/podcast.

  • Dario Amodei on the AGI Exponential: Anthropic’s High-Stakes Financial Model and the Future of Intelligence

    TL;DW (Too Long; Didn’t Watch)

    Anthropic CEO Dario Amodei joined Dwarkesh Patel for a high-stakes deep dive into the endgame of the AI exponential. Amodei predicts that by 2026 or 2027, we will reach a “country of geniuses in a data center”—AI systems capable of Nobel Prize-level intellectual work across all digital domains. While technical scaling remains remarkably smooth, Amodei warns that the real-world friction of economic diffusion and the ruinous financial risks of $100 billion training clusters are now the primary bottlenecks to total global transformation.


    Key Takeaways

    • The Big Blob Hypothesis: Intelligence is an emergent property of scaling compute, data, and broad distribution; specific algorithmic “cleverness” is often just a temporary workaround for lack of scale.
    • AGI is a 2026-2027 Event: Amodei is 90% certain we reach genius-level AGI by 2035, with a strong “hunch” that the technical threshold for a “country of geniuses” arrives in the next 12-24 months.
    • Software Engineering is the First Domino: Within 6-12 months, models will likely perform end-to-end software engineering tasks, shifting human engineers from “writers” to “editors” and strategic directors.
    • The $100 Billion Gamble: AI labs are entering a “Cournot equilibrium” where massive capital requirements create a high barrier to entry. Being off by just one year in revenue growth projections can lead to company-wide bankruptcy.
    • Economic Diffusion Lag: Even after AGI-level capabilities exist in the lab, real-world adoption (curing diseases, legal integration) will take years due to regulatory “jamming” and organizational change management.

    Detailed Summary: Scaling, Risk, and the Post-Labor Economy

    The Three Laws of Scaling

    Amodei revisits his foundational “Big Blob of Compute” hypothesis, asserting that intelligence scales predictably when compute and data are scaled in proportion—a process he likens to a chemical reaction. He notes a shift from pure pre-training scaling to a new regime of Reinforcement Learning (RL) and Test-Time Scaling. These allow models to “think” longer at inference time, unlocking reasoning capabilities that pre-training alone could not achieve. Crucially, these new scaling laws appear just as smooth and predictable as the ones that preceded them.
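
    The "smooth and predictable" behavior is usually written down as a power law. One standard Chinchilla-style parameterization (the concrete form is our gloss, not a formula Amodei states in the interview):

    ```latex
    % Loss as a function of parameters N and training tokens D:
    % E is the irreducible loss; A, B, \alpha, \beta are fitted constants.
    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
    ```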

    The “Country of Geniuses” and the End of Code

    A recurring theme is the imminent automation of software engineering. Amodei predicts that AI will soon handle end-to-end SWE tasks, including setting technical direction and managing environments. He argues that because AI can ingest a million-line codebase into its context window in seconds, it bypasses the months of “on-the-job” learning required by human engineers. This “country of geniuses” will operate at 10-100x human speed, potentially compressing a century of biological and technical progress into a single decade—a concept he calls the “Compressed 21st Century.”

    Financial Models and Ruinous Risk

    The economics of building the first AGI are terrifying. Anthropic’s revenue has scaled 10x annually (zero to $10 billion in three years), but labs are trapped in a cycle of spending every dollar on the next, larger cluster. Amodei explains that building a $100 billion data center requires a 2-year lead time; if demand growth slows from 10x to 5x during that window, the lab collapses. This financial pressure forces a “soft takeoff” where labs must remain profitable on current models to fund the next leap.
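
    A stylized version of that bankruptcy math, with invented numbers (a sketch of the dynamic, not Anthropic's actual financial model):

    ```python
    # Stylized "off by one year" math. All numbers are invented for illustration.
    run_rate = 10e9               # $10B current revenue run rate
    planned_growth = 10           # plan assumes 10x/yr demand growth
    lead_time_years = 2           # cluster committed now, online in ~2 years
    capex = 100e9                 # $100B cluster commitment

    # The cluster (and the debt behind it) is sized against planned year-2 demand:
    planned_revenue = run_rate * planned_growth ** lead_time_years   # $1,000B

    for label, growth in [("plan: 10x/yr", 10), ("slowdown: 5x/yr", 5)]:
        actual = run_rate * growth ** lead_time_years
        shortfall = planned_revenue - actual
        print(f"{label}: year-2 revenue ${actual / 1e9:,.0f}B, "
              f"${shortfall / 1e9:,.0f}B short of what the ${capex / 1e9:.0f}B cluster was sized for")
    ```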

    Governance and the Authoritarian Threat

    Amodei expresses deep concern over “offense-dominant” AI, where a single misaligned model could cause catastrophic damage. He advocates for “AI Constitutions”—teaching models principles like “honesty” and “harm avoidance” rather than rigid rules—to allow for better generalization. Geopolitically, he supports aggressive chip export controls, arguing that democratic nations must hold the “stronger hand” during the inevitable post-AI world order negotiations to prevent a global “totalitarian nightmare.”


    Final Thoughts: The Intelligence Overhang

    The most chilling takeaway from this interview is the concept of the Intelligence Overhang: the gap between what AI can do in a lab and what the economy is prepared to absorb. Amodei suggests that while the “silicon geniuses” will arrive shortly, our institutions—the FDA, the legal system, and corporate procurement—are “jammed.” We are heading into a world of radical “biological freedom” and the potential cure for most diseases, yet we may be stuck in a decade-long regulatory bottleneck while the “country of geniuses” sits idle in their data centers. The winner of the next era won’t just be the lab with the most FLOPs, but the society that can most rapidly retool its institutions to survive its own technological adolescence.

    For more insights, visit Anthropic or check out the full transcript at Dwarkesh Patel’s Podcast.

  • Andrej Karpathy on the Decade of AI Agents: Insights from His Dwarkesh Podcast Interview

    TL;DR

    Andrej Karpathy’s reflections on artificial intelligence trace the quiet, inevitable evolution of deep learning systems into general-purpose intelligence. He emphasizes that the current breakthroughs are not sudden revolutions but the result of decades of scaling simple ideas — neural networks trained with enormous data and compute resources. The essay captures how this scaling leads to emergent behaviors, transforming AI from specialized tools into flexible learning systems capable of handling diverse real-world tasks.

    Summary

    Karpathy explores the evolution of AI from early, limited systems into powerful general learners. He frames deep learning as a continuation of a natural process — optimization through scale and feedback — rather than a mysterious or handcrafted leap forward. Small, modular algorithms like backpropagation and gradient descent, when scaled with modern hardware and vast datasets, have produced behaviors that resemble human-like reasoning, perception, and creativity.

    He argues that this progress is driven by three reinforcing trends: increased compute power (especially GPUs and distributed training), exponentially larger datasets, and the willingness to scale neural networks far beyond human intuition. These factors combine to produce models that are not just better at pattern recognition but are capable of flexible generalization, learning to write code, generate art, and reason about the physical world.

    Drawing from his experience at OpenAI and Tesla, Karpathy illustrates how the same fundamental architectures power both self-driving cars and large language models. Both systems rely on pattern recognition, prediction, and feedback loops — one for navigating roads, the other for navigating language. The essay connects theory to practice, showing that general-purpose learning is not confined to labs but already shapes daily technologies.

    Ultimately, Karpathy presents AI as an emergent phenomenon born from scale, not human ingenuity alone. Just as evolution discovered intelligence through countless iterations, AI is discovering intelligence through optimization — guided not by handcrafted rules but by data and feedback.

    Key Takeaways

    • AI progress is exponential: Breakthroughs that seem sudden are the cumulative effect of scaling and compounding improvements.
    • Simple algorithms, massive impact: The underlying principles — gradient descent, backpropagation, and attention — are simple but immensely powerful when scaled.
    • Scale is the engine of intelligence: Data, compute, and model size form a triad that drives emergent capabilities.
    • Generalization emerges from scale: Once models reach sufficient size and data exposure, they begin to generalize across modalities and tasks.
    • Parallel to evolution: Intelligence, whether biological or artificial, arises from iterative optimization processes — not design.
    • Unified learning systems: The same architectures can drive perception, language, planning, and control.
    • AI as a natural progression: What humanity is witnessing is not an anomaly but a continuation of the evolution of intelligence through computation.

    Discussion

    The essay invites a profound reflection on the nature of intelligence itself. Karpathy’s framing challenges the idea that AI development is primarily an act of invention. Instead, he suggests that intelligence is an attractor state — something the universe converges toward given the right conditions: energy, computation, and feedback. This idea reframes AI not as an artificial construct but as a natural phenomenon, emerging wherever optimization processes are powerful enough.

    This perspective has deep implications. It implies that the future of AI is not dependent on individual breakthroughs or genius inventors but on the continuation of scaling trends — more data, more compute, more refinement. The question becomes not whether AI will reach human-level intelligence, but when and how we’ll integrate it into our societies.

    Karpathy’s view also bridges philosophy and engineering. By comparing machine learning to evolution, he removes the mystique from intelligence, positioning it as an emergent property of systems that self-optimize. In doing so, he challenges traditional notions of creativity, consciousness, and design — raising questions about whether human intelligence is just another instance of the same underlying principle.

    For engineers and technologists, his message is empowering: the path forward lies not in reinventing the wheel but in scaling what already works. For ethicists and policymakers, it’s a reminder that these systems are not controllable in the traditional sense — their capabilities unfold with scale, often unpredictably. And for society as a whole, it’s a call to prepare for a world where intelligence is no longer scarce but abundant, embedded in every tool and interaction.

    Karpathy’s work continues to resonate because it captures the duality of the AI moment: the awe of creation and the humility of discovery. His argument that “intelligence is what happens when you scale learning” provides both a technical roadmap and a philosophical anchor for understanding the transformations now underway.

    In short, AI isn’t just learning from us — it’s showing us what learning itself really is.

  • Dwarkesh Patel: From Podcasting Prodigy to AI Chronicler with The Scaling Era

    TLDW (Too Long; Didn’t Watch)

    Dwarkesh Patel, a 24-year-old podcasting sensation, has made waves with his deep, unapologetically intellectual interviews on science, history, and technology. In a recent Core Memory Podcast episode hosted by Ashlee Vance, Patel announced his new book, The Scaling Era: An Oral History of AI, co-authored with Gavin Leech and published by Stripe Press. Released digitally on March 25, 2025, with a hardcover to follow in July, the book compiles insights from AI luminaries like Mark Zuckerberg and Satya Nadella, offering a vivid snapshot of the current AI revolution. Patel’s journey from a computer science student to a chronicler of the AI age, his optimistic vision for a future enriched by artificial intelligence, and his reflections on podcasting as a tool for learning and growth take center stage in this engaging conversation.


    At just 24, Dwarkesh Patel has carved out a unique niche in the crowded world of podcasting. Known for his probing interviews with scientists, historians, and tech pioneers, Patel refuses to pander to short attention spans, instead diving deep into complex topics with a gravitas that belies his age. On March 25, 2025, he joined Ashlee Vance on the Core Memory Podcast to discuss his life, his meteoric rise, and his latest venture: a book titled The Scaling Era: An Oral History of AI, published by Stripe Press. The episode, recorded in Patel’s San Francisco studio, offers a window into the mind of a young intellectual who’s become a key voice in documenting the AI revolution.

    Patel’s podcasting career began as a side project while he was a computer science student at the University of Texas. What started with interviews of economists like Bryan Caplan and Tyler Cowen has since expanded into a platform—the Lunar Society—that tackles everything from ancient DNA to military history. But it’s his focus on artificial intelligence that has garnered the most attention in recent years. Having interviewed the likes of Dario Amodei, Satya Nadella, and Mark Zuckerberg, Patel has positioned himself at the epicenter of the AI boom, capturing the thoughts of the field’s biggest players as large language models reshape the world.

    The Scaling Era, co-authored with Gavin Leech, is the culmination of these efforts. Released digitally on March 25, 2025, with a print edition slated for July, the book stitches together Patel’s interviews into a cohesive narrative, enriched with commentary, footnotes, and charts. It’s an oral history of what Patel calls the “scaling era”—the period where throwing more compute and data at AI models has yielded astonishing, often mysterious, leaps in capability. “It’s one of those things where afterwards, you can’t get the sense of how people were thinking about it at the time,” Patel told Vance, emphasizing the book’s value as a time capsule of this pivotal moment.

    The process of creating The Scaling Era was no small feat. Patel credits co-author Leech and editor Rebecca for helping weave disparate perspectives—from computer scientists to primatologists—into a unified story. The first chapter, for instance, explores why scaling works, drawing on insights from AI researchers, neuroscientists, and anthropologists. “Seeing all these snippets next to each other was a really fun experience,” Patel said, highlighting how the book connects dots he’d overlooked in his standalone interviews.

    Beyond the book, the podcast delves into Patel’s personal story. Born in India, he moved to the U.S. at age eight, bouncing between rural states like North Dakota and West Texas as his father, a doctor on an H1B visa, took jobs where domestic talent was scarce. A high school debate star—complete with a “chiseled chin” and concise extemp speeches—Patel initially saw himself heading toward a startup career, dabbling in ideas like furniture resale and a philosophy-inspired forum called PopperPlay (a name he later realized had unintended connotations). But it was podcasting that took off, transforming from a gap-year experiment into a full-fledged calling.

    Patel’s optimism about AI shines through in the conversation. He envisions a future where AI eliminates scarcity, not just of material goods but of experiences—think aesthetics, peak human moments, and interstellar exploration. “I’m a transhumanist,” he admitted, advocating for a world where humanity integrates with AI to unlock vast potential. He predicts AI task horizons doubling every seven months, potentially leading to “discontinuous” economic impacts within 18 months if models master computer use and reinforcement learning (RL) environments. Yet he remains skeptical of a “software-only singularity,” arguing that physical bottlenecks—like chip manufacturing—will temper the pace of progress, requiring a broader tech stack upgrade akin to building an iPhone in 1900.
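
    Patel's seven-month doubling figure compounds quickly; a minimal sketch of the 18-month arithmetic (the one-hour starting horizon is an assumed, illustrative baseline):

    ```python
    # Task horizons doubling every 7 months (Patel's figure); baseline is assumed.
    h0_hours = 1.0                  # assume ~1-hour tasks today (illustrative)
    months = 18
    horizon = h0_hours * 2 ** (months / 7)
    print(f"after {months} months: ~{horizon:.1f}-hour tasks")   # ~5.9 hours
    ```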

    On the race to artificial general intelligence (AGI), Patel questions whether the first lab to get there will dominate indefinitely. He points to fast-follow dynamics—where breakthroughs are quickly replicated at lower cost—and the coalescing approaches of labs like xAI, OpenAI, and Anthropic. “The cost of training these models is declining like 10x a year,” he noted, suggesting a future where AGI becomes commodified rather than monopolized. He’s cautiously optimistic about safety, too, estimating a 10-20% “P(doom)” (probability of catastrophic outcomes) but arguing that current lab leaders are far better than alternatives like unchecked nationalized efforts or a reckless trillion-dollar GPU hoard.

    Patel’s influences—like economist Tyler Cowen, who mentored him early on—and unexpected podcast hits—like military historian Sarah Paine—round out the episode. Paine, a Naval War College scholar whose episodes with Patel have exploded in popularity, exemplifies his knack for spotlighting overlooked brilliance. “You really don’t know what’s going to be popular,” he mused, advocating for following personal curiosity over chasing trends.

    Looking ahead, Patel aims to make his podcast the go-to place for understanding the AI-driven “explosive growth” he sees coming. Writing, though a struggle, will play a bigger role as he refines his takes. “I want it to become the place where… you come to make sense of what’s going on,” he said. In a world often dominated by shallow content, Patel’s commitment to depth and learning stands out—a beacon for those who’d rather grapple with big ideas than scroll through 30-second blips.

  • The AI Revolution Unveiled: Jonathan Ross on Groq, NVIDIA, and the Future of Inference


    TL;DR

    Jonathan Ross, Groq’s CEO, predicts inference will eclipse training in AI’s future, with Groq’s Language Processing Units (LPUs) outpacing NVIDIA’s GPUs in cost and efficiency. He envisions synthetic data breaking scaling limits, a $1.5 billion Saudi revenue deal fueling Groq’s growth, and AI unlocking human potential through prompt engineering, though he warns of an overabundance trap.

    Detailed Summary

    In a captivating 20VC episode with Harry Stebbings, Jonathan Ross, the mastermind behind Groq and Google’s original Tensor Processing Unit (TPU), outlines a transformative vision for AI. Ross asserts that inference—deploying AI models in real-world scenarios—will soon overshadow training, challenging NVIDIA’s GPU stronghold. Groq’s LPUs, engineered for affordable, high-volume inference, deliver over five times the cost efficiency and three times the energy savings of NVIDIA’s training-focused GPUs by avoiding external memory like HBM. He champions synthetic data from advanced models as a breakthrough, dismantling scaling law barriers and redirecting focus to compute, data, and algorithmic bottlenecks.

    Groq’s explosive growth—from 640 chips in early 2024 to over 40,000 by year-end, aiming for 2 million in 2025—is propelled by a $1.5 billion Saudi revenue deal, not a funding round. Partners like Aramco fund the capital expenditure, sharing profits after a set return, liberating Groq from financial limits. Ross targets NVIDIA’s 40% inference revenue as a weak spot, cautions against a data center investment bubble driven by hyperscaler exaggeration, and foresees AI value concentrating among giants via a power law—yet Groq plans to join them by addressing unmet demands. Reflecting on Groq’s near-failure, salvaged by “Grok Bonds,” he dreams of AI enhancing human agency, potentially empowering 1.4 billion Africans through prompt engineering, while urging vigilance against settling for “good enough” in an abundant future.

    The Big Questions Raised—and Answered

    Ross’s insights provoke profound metaphorical questions about AI’s trajectory and humanity’s role. Here’s what the discussion implicitly asks, paired with his responses:

    • What happens when creation becomes so easy it redefines who gets to create?
      • Answer: Ross champions prompt engineering as a revolutionary force, turning speech into a tool that could unleash 1.4 billion African entrepreneurs. By making creation as simple as talking, AI could shift power from tech gatekeepers to the masses, sparking a global wave of innovation.
    • Can an underdog outrun a titan in a scale-driven game?
      • Answer: Groq can outpace NVIDIA, Ross asserts, by targeting inference—a massive, underserved market—rather than battling over training. With no HBM bottlenecks and a scalable Saudi-backed model, Groq’s agility could topple NVIDIA’s inference share, proving size isn’t everything.
    • What’s the human cost when machines replace our effort?
      • Answer: Ross likens LPUs to tireless employees, predicting a shift from labor to compute-driven economics. Yet, he warns of “financial diabetes”—a loss of drive in an AI-abundant world—urging us to preserve agency lest we become passive consumers of convenience.
    • Is the AI gold rush a promise or a pipe dream?
      • Answer: It’s both. Ross foresees billions wasted on overhyped data centers and “AI t-shirts,” but insists the total value created will outstrip losses. The winners, like Groq, will solve real problems, not chase fleeting trends.
    • How do we keep innovation’s spirit alive amid efficiency’s rise?
      • Answer: By prioritizing human agency and delegation—Ross’s “anti-founder mode”—over micromanagement, he says. Groq’s 25 million token-per-second coin aligns teams to innovate, not just optimize, ensuring efficiency amplifies creativity.
    • What’s the price of chasing a future that might not materialize?
      • Answer: Seven years of struggle taught Ross the emotional and financial toll is steep—Groq nearly died—but strategic bets (like inference) pay off when the wave hits. Resilience turns risk into reward.
    • Will AI’s pursuit drown us in wasted ambition?
      • Answer: Partially, yes—Ross cites VC’s “Keynesian Beauty Contest,” where cash floods copycats. But hyperscalers and problem-solvers like Groq will rise above the noise, turning ambition into tangible progress.
    • Can abundance liberate us without trapping us in ease?
      • Answer: Ross fears AI could erode striving, drawing from his boom-bust childhood. Prompt engineering offers liberation—empowering billions—but only if outliers reject “good enough” and push for excellence.

    Jonathan Ross’s vision is a clarion call: AI’s future isn’t just about faster chips or bigger models—it’s about who wields the tools and how they shape us. Groq’s battle with NVIDIA isn’t merely corporate; it’s a referendum on whether innovation can stay human-centric in an age of machine abundance. As Ross puts it, “Your job is to get positioned for the wave”—and he’s riding it, challenging us to paddle alongside or risk being left ashore.