PJFP.com

Pursuit of Joy, Fulfillment, and Purpose


  • Jensen Huang on Lex Fridman: NVIDIA’s CEO Reveals His Vision for the AI Revolution, Scaling Laws, and Why Intelligence Is Now a Commodity

    A deep breakdown of Lex Fridman Podcast #494 featuring Jensen Huang, CEO of NVIDIA, covering extreme co-design, the four AI scaling laws, CUDA’s origin story, the future of programming, AGI timelines, and what it takes to lead the world’s most valuable company.

    TLDW (Too Long, Didn’t Watch)

Jensen Huang sat down with Lex Fridman for a sprawling two-and-a-half-hour conversation covering the full arc of NVIDIA’s evolution from a GPU gaming company to the engine of the AI revolution. Jensen explains how NVIDIA now thinks in terms of rack-scale and pod-scale computing rather than individual chips, breaks down his four AI scaling laws (pre-training, post-training, test-time, and agentic), and reveals the near-existential bet the company made by putting CUDA on GeForce. He shares his views on China’s tech ecosystem, his deep respect for TSMC, why he turned down the chance to become TSMC’s CEO, how Elon Musk’s systems engineering approach built Colossus in record time, and why he believes AGI already exists. He also discusses why the future of programming is really about “specification,” why intelligence is being commoditized while humanity is the true superpower, and how he manages the enormous pressure of leading a company that nations and economies depend on. His core message: do not let the democratization of intelligence cause you anxiety. Instead, let it inspire you.

    Key Takeaways

    1. NVIDIA No Longer Thinks in Chips. It Thinks in AI Factories.

    Jensen’s mental model of what NVIDIA builds has fundamentally changed. He no longer picks up a chip to represent a new product generation. Instead, his mental model is a gigawatt-scale AI factory with power generation, cooling systems, and thousands of engineers bringing it online. The unit of computing at NVIDIA has evolved from GPU to computer to cluster to AI factory. His next mental “click” is planetary-scale computing.

    2. Extreme Co-Design Is NVIDIA’s Secret Weapon

    The reason NVIDIA dominates is not just better GPUs. It is the extreme co-design of the entire stack: GPU, CPU, memory, networking, switching, power, cooling, storage, software, algorithms, and applications. Jensen explains that when you distribute workloads across tens of thousands of computers and want them to go a million times faster (not just 10,000 times), every single component becomes a bottleneck. This is a restatement of Amdahl’s Law at scale. NVIDIA’s organizational structure directly reflects this co-design philosophy. Jensen has 60+ direct reports, holds no one-on-ones, and runs every meeting as a collective problem-solving session where specialists across all domains are present and contribute.

    3. The Four AI Scaling Laws Are a Flywheel

    Jensen outlined four distinct scaling laws that form a continuous loop:

    Pre-training scaling: Larger models plus more data equals smarter AI. The industry panicked when people said data was running out, but synthetic data generation has removed that ceiling. Data is now limited by compute, not by human generation.

    Post-training scaling: Fine-tuning, reinforcement learning from human feedback, and curated data continue to scale AI capabilities beyond what pre-training alone achieves.

Test-time scaling: Inference is not “easy” as many predicted. It is thinking, reasoning, planning, and search. It is far more compute-intensive than memorization and pattern matching. This is why inference chips cannot be commoditized the way many expected.

    Agentic scaling: A single AI agent can spawn sub-agents, creating teams. This is like scaling a company by hiring more employees rather than trying to make one person faster. The experiences generated by agents feed back into pre-training, creating a flywheel.

    4. The CUDA Bet Nearly Killed NVIDIA

    Putting CUDA on GeForce was one of the most consequential technology decisions in modern history. It increased GPU costs by roughly 50%, which crushed the company’s gross margins at a time when NVIDIA was a 35% gross margin business. The company’s market cap dropped from around $7-8 billion to approximately $1.5 billion. But Jensen understood that install base defines a computing architecture, not elegance. He pointed to x86 as proof: a less-than-elegant architecture that defeated beautifully designed RISC alternatives because of its massive install base. CUDA on GeForce put a supercomputer in the hands of every researcher, every scientist, every student. It took a decade to recover, but that install base became the foundation of the deep learning revolution.

    5. NVIDIA’s Moat Is Trust, Velocity, and Install Base

Jensen was direct about NVIDIA’s competitive advantage. The CUDA install base is the number one asset. Developers target CUDA first because it reaches hundreds of millions of computers and is in every cloud, every OEM, every country, and every industry. NVIDIA ships a new architecture roughly every year. No company in history has built systems of this complexity at this cadence. And the trust that NVIDIA will maintain, improve, and optimize CUDA indefinitely is something developers can count on. If someone created “GUDA” or “TUDA” tomorrow, it would not matter. The install base, velocity of execution, ecosystem breadth, and earned trust create a compounding advantage that is nearly impossible to replicate.

    6. Jensen Believes AGI Is Already Here

    When asked about AGI timelines, Jensen said he believes AGI has been achieved. His reasoning is practical: an agentic system today could plausibly create a web service, achieve virality, and generate a billion dollars in revenue, even if temporarily. This is not meaningfully different from many internet-era companies that did the same thing with technology no more sophisticated than what current AI agents can produce. He does not believe 100,000 agents could build another NVIDIA, but he believes a single agent-driven viral product is within reach right now.

    7. The Future of Programming Is Specification, Not Syntax

    Jensen believes the number of programmers in the world will increase dramatically, not decrease. His reasoning: the definition of coding is expanding to include specification and architectural description in natural language. This expands the population of “coders” from roughly 30 million professional developers to potentially a billion people. Every carpenter, plumber, accountant, and farmer who can describe what they want a computer to build is now a coder. The artistry of the future is knowing where on the spectrum of specification to operate, from highly prescriptive to exploratory and open-ended.

    8. China Is the Fastest Innovating Country in the World

    Jensen gave a nuanced and detailed explanation of why China’s tech ecosystem is so formidable. About 50% of the world’s AI researchers are Chinese. China’s tech industry emerged during the mobile cloud era, so it was built on modern software from the start. The country’s provincial competition creates an insane internal competitive environment. And the cultural norm of knowledge-sharing through school and family networks means China effectively operates as an open-source ecosystem at all times. This is why Chinese companies contribute disproportionately to open source. Their engineers’ brothers, friends, and schoolmates work at competing companies, and sharing knowledge is the cultural default.

    9. The Power Grid Has Enormous Waste That AI Can Exploit

    Jensen proposed a pragmatic solution to the energy problem for AI data centers. Power grids are designed for worst-case conditions with margin, but 99% of the time they run at around 60% of peak capacity. That idle capacity is simply wasted. Jensen wants data centers to negotiate flexible contracts where they absorb excess power most of the time and gracefully degrade during rare peak demand periods. This requires three things: customers accepting that “six nines” uptime may not always be necessary, data centers that can dynamically shift workloads, and utilities that offer tiered power delivery contracts instead of all-or-nothing commitments.
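The arithmetic behind this claim is easy to sketch. A minimal illustration in Python (the function name and the 100 GW peak figure are assumptions for illustration; only the ~60%-of-peak and 99%-of-the-time numbers come from the paragraph above):

```python
def absorbable_gw(peak_gw: float, typical_load_frac: float,
                  off_peak_time_frac: float) -> float:
    """Time-averaged idle grid capacity a curtailable data center could
    absorb: the gap between peak and typical load, weighted by how
    often the grid is actually off-peak."""
    return peak_gw * (1.0 - typical_load_frac) * off_peak_time_frac

# A grid sized for a hypothetical 100 GW peak that runs at ~60% of peak
# 99% of the time leaves roughly 39.6 GW idle on average -- capacity a
# flexible AI data center could soak up under a tiered contract.
print(absorbable_gw(100.0, 0.60, 0.99))  # ~39.6
```

The sketch ignores transmission constraints and locality, which is why the tiered-contract and graceful-degradation conditions in the paragraph above matter.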

    10. Jensen Turned Down the CEO Role at TSMC

    In 2013, TSMC founder Morris Chang offered Jensen the chance to become CEO of TSMC. Jensen confirmed the story is true and said he was deeply honored. But he had already envisioned what NVIDIA could become and felt it was his sole responsibility to make that vision happen. He sees the relationship with TSMC as one built on three decades of trust, hundreds of billions of dollars in business, and zero formal contracts.

    11. Elon Musk’s Systems Engineering Approach Is Instructive

    Jensen praised Elon Musk’s approach to building the Colossus supercomputer in Memphis in just four months. He highlighted several principles: Elon questions everything relentlessly, strips every process down to the minimum necessary, is physically present at the point of action, and his personal urgency creates urgency in every supplier. Jensen drew a parallel to NVIDIA’s own “speed of light” methodology, where every process is benchmarked against the physical limits of what is possible, not against historical baselines.

    12. Intelligence Is a Commodity. Humanity Is Not.

    Perhaps the most philosophical takeaway from the conversation: Jensen argued that intelligence is a functional, measurable thing that is being commoditized. He surrounded himself with 60 direct reports who are all “superhuman” in their respective domains, more educated and deeper in their specialties than he is. Yet he sits in the middle orchestrating all of them. This proves that intelligence alone does not determine success. Character, compassion, grit, determination, tolerance for embarrassment, and the ability to endure suffering are the real differentiators. Jensen wants the audience to understand that the word we should elevate is not intelligence but humanity.

    Detailed Summary

    From GPU Maker to AI Infrastructure Company

    The conversation opened with Jensen explaining NVIDIA’s evolution from chip-scale to rack-scale to pod-scale design. The Vera Rubin pod, announced at GTC, contains seven chip types, five purpose-built rack types, 40 racks, 1.2 quadrillion transistors, nearly 20,000 NVIDIA dies, over 1,100 Rubin GPUs, 60 exaflops of compute, and 10 petabytes per second of scale bandwidth. And that is just one pod. NVIDIA plans to produce roughly 200 of these pods per week.

    Jensen explained that extreme co-design is necessary because the problems AI must solve no longer fit inside a single computer. When you distribute a workload across 10,000 computers but want a million-fold speedup, everything becomes a bottleneck: computation, networking, switching, memory, power, cooling. This is fundamentally an Amdahl’s Law problem at planetary scale. If computation represents only 50% of the workload, speeding it up infinitely only doubles total throughput. Every layer must be co-optimized simultaneously.
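The Amdahl’s Law point can be stated precisely. A short sketch (the function name is mine; the 50% figure comes from the paragraph above):

```python
def amdahl_speedup(accel_fraction: float, factor: float) -> float:
    """Overall speedup when only `accel_fraction` of the work is
    accelerated by `factor`; the rest remains serial (Amdahl's Law)."""
    return 1.0 / ((1.0 - accel_fraction) + accel_fraction / factor)

# If computation is only 50% of the workload, an effectively infinite
# speedup of that half still caps total throughput at 2x.
print(round(amdahl_speedup(0.5, 1e12), 3))   # 2.0

# The co-design argument: raising the accelerated fraction (by
# optimizing networking, memory, cooling, etc. together) matters more
# than making any one component faster.
print(round(amdahl_speedup(0.999, 1e6), 1))  # 999.0
```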

    NVIDIA’s organizational structure is a direct reflection of this co-design philosophy. Jensen has more than 60 direct reports, almost all with deep engineering expertise. He does not do one-on-ones. Every meeting is a collective problem-solving session where the memory expert, the networking expert, the cooling expert, and the power delivery expert are all in the room together, attacking the same problem.

    The Strategic History of CUDA

    Jensen walked through the step-by-step journey from graphics accelerator to computing platform. The company invented a programmable pixel shader, then added IEEE-compatible FP32 to its shaders, then put C on top of that (called Cg), and eventually arrived at CUDA. The critical strategic decision was putting CUDA on GeForce, a consumer product.

    This was nearly an existential move. It increased GPU costs by roughly 50% and consumed all of the company’s gross profit at a time when NVIDIA was a 35% gross margin business. The market cap cratered from around $7-8 billion to approximately $1.5 billion. But Jensen understood a principle that many technologists overlook: install base defines a computing architecture. x86 survived not because it was elegant but because it was everywhere. CUDA on GeForce put a supercomputing capability in the hands of every gamer, every student, every researcher who built their own PC. When the deep learning revolution arrived, CUDA was already the foundation.

    How Jensen Leads and Makes Decisions

    Jensen described a leadership philosophy built on continuous reasoning in public. He does not make announcements in the traditional sense. Instead, he shapes the belief systems of his employees, board, partners, and the broader industry over months and years by reasoning through decisions step by step, using every new piece of external information as a brick in the foundation. By the time he formally announces a strategic direction, the reaction is not surprise but rather, “What took you so long?”

    He applies this same approach to his supply chain. He personally visits CEOs of DRAM companies, packaging companies, and infrastructure providers. He explains the dynamics of the industry, shares his vision of future demand, and helps them reason through why they should make multi-billion-dollar capital investments. Three years ago, he convinced DRAM CEOs that HBM memory would become mainstream for data centers, which sounded ridiculous at the time. Those companies had record years as a result.

    Jensen’s “speed of light” methodology is his framework for decision-making. Every process, every design, every cost is benchmarked against the physical limits of what is theoretically possible. He prefers this to continuous improvement, which he views as incrementalism. He would rather strip a 74-day process back to zero and ask, “If we built this from scratch today, how long would it take?” Often the answer is six days, and the remaining 68 days are filled with accumulated compromises that can be challenged individually.

    AI Scaling Laws and the Future of Compute

    Jensen broke down the four scaling laws in detail. The pre-training scaling law, which depends on model size and data volume, was thought to be hitting a wall when the industry worried about running out of high-quality human-generated data. Jensen argued this concern is misplaced. Synthetic data generation has effectively removed the ceiling, and the constraint is now compute, not data.

    Post-training continues to scale through fine-tuning and reinforcement learning. Test-time scaling was the most counterintuitive for the industry. Many predicted that inference would be “easy” and that inference chips would be small, cheap, and commoditized. Jensen saw this as fundamentally wrong. Inference is thinking: reasoning, planning, search, decomposing novel problems into solvable pieces. Thinking is much harder than reading, and test-time compute is intensely resource-hungry.

    Agentic scaling is the newest frontier. A single AI agent can spawn sub-agents, effectively multiplying intelligence the way a company scales by hiring. The experiences and data generated by agentic systems feed back into pre-training, creating a continuous improvement loop. Jensen described this as the reason NVIDIA designed the Vera Rubin rack architecture differently from the Grace Blackwell architecture. Grace Blackwell was optimized for running large language models. Vera Rubin is designed for agents, which need to access files, use tools, do research, and spin off sub-agents. NVIDIA anticipated this architectural shift two and a half years before tools like OpenClaw arrived.

    China, TSMC, and the Global Supply Chain

    Jensen provided a thoughtful analysis of China’s tech ecosystem. He identified several structural advantages: 50% of the world’s AI researchers are Chinese, the tech industry was born during the mobile cloud era (making it natively modern), provincial competition creates internal Darwinian pressure, and the culture of knowledge-sharing through school and family networks makes China effectively open-source by default.

    On TSMC, Jensen emphasized that the deepest misunderstanding about the company is that its technology is its only advantage. Their manufacturing orchestration system, which dynamically manages the shifting demands of hundreds of companies, is “completely miraculous.” Their culture uniquely balances bleeding-edge technology excellence with world-class customer service. And the trust that Jensen places in TSMC is extraordinary: three decades of partnership, hundreds of billions of dollars in business, and no formal contract.

    Jensen also discussed the AI supply chain more broadly. NVIDIA has roughly 200 suppliers contributing technology to each rack. Jensen personally manages these relationships, flying to supplier sites, explaining industry dynamics, and helping CEOs reason through multi-billion-dollar investment decisions. When asked if supply chain bottlenecks keep him up at night, he said no, because he has already communicated what NVIDIA needs, his partners have told him what they will deliver, and he believes them.

    The Energy Challenge and Space Computing

    On the energy front, Jensen proposed a practical approach to the power problem. Rather than waiting for new power generation, he wants to capture the enormous waste already present in the grid. Power infrastructure is designed for worst-case peak demand, but 99% of the time it runs far below capacity. AI data centers could absorb this excess capacity with flexible contracts that allow graceful degradation during rare peak periods.

    On space computing, NVIDIA already has GPUs in orbit for satellite imaging. Jensen acknowledged the cooling challenge (no conduction or convection in space, only radiation) but sees it as a future frontier worth cultivating. In the meantime, he is focused on the lower-hanging fruit of eliminating waste in the terrestrial power grid.

    On AGI, Jobs, and the Human Future

    Jensen stated directly that he believes AGI has been achieved, at least by the practical definition of an AI system capable of creating a billion-dollar company. He sees it as plausible that an agent could build a viral web service that briefly generates enormous revenue, just as many internet-era companies did with technology no more sophisticated than what current AI agents produce.

    On jobs, Jensen was both compassionate and clear-eyed. He told the story of radiology: computer vision became superhuman around 2019-2020, and the prediction was that radiologists would disappear. Instead, the number of radiologists grew because AI allowed them to study more scans, diagnose better, and serve more patients. The purpose of the job (diagnosing disease) did not change, even though the tools changed completely.

    He applied this principle broadly: the number of software engineers at NVIDIA will grow, not decline, because their purpose is solving problems, not writing lines of code. The number of programmers globally will grow because the definition of coding is expanding to include natural language specification, opening it up to potentially a billion people.

    His advice to anyone worried about their job is straightforward: go use AI now. Become expert in it. Every profession, from carpenter to pharmacist to lawyer, will be elevated by AI tools. The people who learn to use AI will be the ones who get hired, promoted, and empowered.

    Mortality, Succession, and Legacy

    The conversation closed with deeply personal reflections. Jensen said he really does not want to die. He sees the current moment as a “once in a humanity experience.” He does not believe in traditional succession planning. Instead, he believes the best succession strategy is to pass on knowledge continuously, every single day, in every meeting, as fast as possible. His hope is to die on the job, instantaneously, with no long period of suffering.

    He described a vision for a kind of digital continuity: sending a humanoid robot into space, continuously improving it in flight, and eventually uploading the consciousness derived from a lifetime of communications, decisions, and reasoning to catch up with it at the speed of light.

    On the emotional experience of leading NVIDIA, Jensen was candid about hitting psychological low points regularly. His coping mechanism is decomposition: break the problem into pieces, reason about what you can control, tell someone who can help, share the burden, and then deliberately forget what is behind you. He compared this to the mental discipline of great athletes who focus only on the next point.

    His final message was about the relationship between intelligence and humanity. Intelligence, he argued, is functional. It is being commoditized. Humanity, character, compassion, grit, tolerance for embarrassment, and the capacity for suffering are the true superpowers. The word society should elevate is not intelligence but humanity.

    Thoughts

    This is one of the most substantive CEO interviews of 2026. What makes it remarkable is not just the breadth of topics but the depth of reasoning Jensen demonstrates in real time. You can actually watch him think through problems on the spot, which is rare for someone at his level.

A few things stand out. First, the CUDA origin story is one of the great strategic narratives in tech history. Absorbing a 50% cost increase on a consumer product, watching your market cap collapse by 80%, and holding the course for a decade because you understood the power of the install base is the kind of conviction that separates generational companies from everyone else.

    Second, Jensen’s framing of the four scaling laws as a flywheel is the clearest articulation anyone has given of why AI compute demand will continue to accelerate. Most people understand pre-training. Fewer understand test-time scaling. Almost nobody is thinking about agentic scaling as a compute multiplier. Jensen has been thinking about it for years and already designed hardware for it before the software ecosystem caught up.

    Third, the discussion on jobs deserves attention. The radiology example is powerful because it is a completed experiment, not a prediction. The profession that was supposed to be eliminated first by AI instead grew. The mechanism is straightforward: when you automate the task, you expand the capacity of the purpose, and demand for the purpose increases. This does not mean there will be no pain or dislocation. Jensen acknowledged that explicitly. But the historical pattern is clear.

    Finally, the philosophical distinction between intelligence and humanity is the kind of framing that could genuinely help people navigate the anxiety of this moment. If you define your value by your intelligence alone, AI commoditization is terrifying. If you define your value by your character, your compassion, your tolerance for suffering, and your willingness to keep going when everything goes wrong, then AI is just the most powerful set of tools you have ever been given.

    Jensen Huang is 62 years old, has been running NVIDIA for 34 years, and shows no signs of slowing down. If anything, his conviction about the future is accelerating alongside his company’s growth.

    Watch the full episode: Lex Fridman Podcast #494 with Jensen Huang

  • Jensen Huang on Nvidia’s Future: Physical AI, the Inference Explosion, Agentic Computing, and Why AI Doomers Are Wrong

    Jensen Huang sat down with the All-In Podcast crew at GTC 2026 for one of the most wide-ranging and candid conversations he’s had in years. From the Groq acquisition to $50 trillion physical AI markets, from defending Nvidia’s pricing to gently calling out Anthropic’s communications missteps, Huang covered everything. Here’s a complete breakdown of everything said — and what it means.


    ⚡ TL;DW

    • Nvidia has evolved from a GPU company into a full-stack AI factory company, and its TAM has expanded by 33–50% just from new rack configurations.
    • Inference demand is exploding — Huang says compute will scale 1 million times, and analysts who model 7–20% growth “don’t understand the scale and breadth of AI.”
    • The Groq acquisition positions Nvidia to run the right workload on the right chip — GPU, LPU, CPU, switch, all orchestrated under Dynamo, the AI factory OS.
    • Physical AI (robotics, autonomous vehicles, industrial automation) is Nvidia’s play at a $50 trillion market — and it’s already a ~$10 billion/year business growing exponentially.
    • OpenClaw (Claude’s open-source agentic framework) is, in Jensen’s view, the new operating system for modern computing.
    • Jensen pushed back hard on AI doomerism — and diplomatically but clearly called out Anthropic’s communications as too extreme.
    • Robots are 3–5 years away from being “all over the place.” Jensen hopes for more than one robot per human on Earth.
    • Dario Amodei’s $1 trillion AI revenue forecast by 2030? Jensen says he’s being too conservative.
    • His advice to young people: become deeply expert at using AI. English majors may end up winning.

    🔑 Key Takeaways

    1. Nvidia Is No Longer a Chip Company

Jensen Huang made clear that Nvidia’s identity has fundamentally shifted. The company is now an AI factory company — building not just GPUs but the entire computing stack: GPUs, CPUs, networking switches, data processing units (BlueField DPUs), and now LPUs via the Groq acquisition. The operating system tying it all together is called Dynamo, named after the Siemens machine that powered the last industrial revolution by turning water into electricity. Huang’s point: Dynamo is doing the same thing for AI — turning raw compute into intelligence at industrial scale.

    2. The Inference Explosion Is Real and Massive

    A year ago, Huang predicted inference would scale enormously. He’s now doubling down: from generative AI to reasoning models, compute requirements grew roughly 100x. From reasoning to agentic AI, another 100x. That’s 10,000x in two years — and Huang says we haven’t even started scaling yet. He believes the ultimate trajectory is 1 million times more compute than where we started. Analysts who project 20–30% revenue growth for Nvidia fundamentally don’t understand what’s coming.
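The multiplication here is worth making explicit (these are the rough order-of-magnitude factors quoted in the talk, not precise measurements):

```python
# Rough factors from the talk, not precise measurements.
gen_to_reasoning = 100       # generative AI -> reasoning models
reasoning_to_agentic = 100   # reasoning models -> agentic AI
so_far = gen_to_reasoning * reasoning_to_agentic
print(so_far)                # 10000 -- the "10,000x in two years"

ultimate = 1_000_000         # Huang's stated ultimate trajectory
print(ultimate // so_far)    # 100 -- i.e. another 100x still implied ahead
```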

    3. Disaggregated Inference Is the New Architecture

    The technical centerpiece of GTC 2026 was disaggregated inference — the idea that the AI processing pipeline is so complex (prefill, decode, working memory, long-term memory, tool use, multi-agent coordination) that it should run across heterogeneous chips, not just a single GPU rack. Nvidia’s Vera Rubin system is built for this: multiple rack types handling different workloads. Jensen says Nvidia’s TAM grew by 33–50% just from adding those four new rack types to what was previously a one-rack company.

    4. The $50 Billion Factory Produces the Cheapest Tokens

    Critics argue that Nvidia’s inference factories cost $40–50B versus competitors at $25–30B. Huang’s rebuttal is clean: don’t equate the price of the factory with the cost of the tokens. A $50B Nvidia factory producing 10x the throughput of a $30B alternative means Nvidia’s tokens are actually cheaper. When land, power, shell, storage, networking, and cooling are already fixed costs, the delta between GPU options is a small fraction of total spend — but the performance difference is enormous.
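Huang’s rebuttal can be worked through with the numbers in this paragraph (the annual token output is a made-up placeholder; only the dollar figures and the 10x throughput ratio come from the text, and the metric deliberately ignores power, utilization, and depreciation):

```python
def cost_per_million_tokens(factory_cost_usd: float,
                            tokens_per_year: float) -> float:
    """Naive capex-only metric: factory price spread over annual token
    output (ignores power, utilization, and depreciation)."""
    return factory_cost_usd / tokens_per_year * 1e6

baseline = 1e15  # hypothetical yearly token output of the $30B factory
nvidia = cost_per_million_tokens(50e9, 10 * baseline)  # 10x throughput
rival = cost_per_million_tokens(30e9, baseline)

print(nvidia, rival)   # $5 vs $30 per million tokens
print(nvidia < rival)  # True: the pricier factory produces cheaper tokens
```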

    5. OpenClaw Is the New OS for Modern Computing

Jensen spent serious time on Claude’s open-source agentic framework (referred to throughout as “OpenClaw”). His view: it’s not just a product announcement — it’s a computing paradigm shift. OpenClaw has a memory system (short-term scratch, long-term file system), skills and tools, resource management and scheduling (including cron jobs), multi-agent spawning, and external I/O. These map directly onto the foundational elements of an operating system: memory, scheduling, processes, and I/O. His conclusion: for the first time, we have a personal AI computer — and it’s open source, running everywhere.
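The OS analogy can be made concrete with a toy sketch. Nothing below is OpenClaw’s actual API — every class and method name is a hypothetical illustration of the structural mapping between an agent and an operating system:

```python
from dataclasses import dataclass, field

@dataclass
class ToyAgent:
    """Hypothetical illustration of the agent-as-OS analogy; not a
    real OpenClaw interface."""
    scratch: list = field(default_factory=list)   # short-term memory
    files: dict = field(default_factory=dict)     # long-term "file system"
    skills: dict = field(default_factory=dict)    # tools, akin to syscalls
    children: list = field(default_factory=list)  # spawned sub-agents

    def remember(self, note: str) -> None:
        self.scratch.append(note)                 # working-memory write

    def persist(self, path: str, data: str) -> None:
        self.files[path] = data                   # durable storage write

    def spawn(self) -> "ToyAgent":
        child = ToyAgent(skills=dict(self.skills))  # sub-agent inherits skills
        self.children.append(child)               # multi-agent spawning
        return child

root = ToyAgent(skills={"search": lambda q: f"results for {q}"})
worker = root.spawn()
worker.remember("subtask: summarize findings")
root.persist("/notes/plan.txt", "delegate summarization")
print(len(root.children), worker.skills["search"]("GTC"))  # 1 results for GTC
```

The point of the toy is only that memory, skills, scheduling of sub-agents, and external I/O are the same primitives an OS kernel manages — which is the claim in the paragraph above.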

    6. Agents Mean Every Engineer Gets 100 Helpers

    Jensen’s internal benchmark at Nvidia: if a $500K/year engineer isn’t spending at least $250K worth of tokens annually, something is wrong. He compared it to a chip designer refusing to use CAD tools and working only in pencil. His vision: every engineer will have 100 agents working alongside them. The nature of programming shifts from writing code to writing ideas, architectures, specifications, and evaluation criteria — and then guiding agents toward outcomes.

    7. Physical AI Is a $50 Trillion Opportunity

    This is the biggest framing in the talk. Physical AI — robotics, autonomous vehicles, industrial automation, agriculture, healthcare instruments — represents the technology industry’s first real shot at a $50 trillion market that has been “largely void of technology until now.” Nvidia started this journey 10 years ago, it’s now inflecting, and it’s already approaching $10 billion/year as a standalone business. Huang expects this to grow exponentially.

    8. Robots Are 3–5 Years Away from Ubiquity

Huang was asked about the “lost decade” of robotics — Google buying and selling Boston Dynamics, years of underwhelming progress. His take: America got into robotics too soon, got exhausted, and quit about five years before the enabling technology (AI “brains”) appeared. Now the brain is here. From a “high-functioning existence proof” (what we have now) to “reasonable products,” technology historically takes 2–3 cycles — meaning 3–5 years. He also flagged China’s formidable position in robotics hardware: motors, rare earth elements, magnets, micro-electronics. The world’s robotics industry will depend heavily on China’s supply chain.

    9. Jensen Thinks Dario Amodei Is Too Conservative

Dario Amodei publicly predicted that AI model and agent companies will generate hundreds of billions in revenue by 2027–28 and reach $1 trillion by 2030. Jensen’s response: “I think he’s being very conservative. Way better than that.” His reasoning? Dario hasn’t fully accounted for the fact that every enterprise software company will become a reseller of AI tokens — an exponential expansion of go-to-market that will dwarf what any AI lab can sell directly.

    10. The AI Moat Is Deep Specialization

    When asked what the real competitive moat is at the application layer, Jensen said: deep specialization. General models will handle general intelligence. But every industry has domain expertise that needs to be captured in specialized sub-agents, trained on proprietary data. The entrepreneur who knows their vertical better than anyone else, connects their agent to customers first, and builds that flywheel — that’s the moat. He framed it as an inversion of traditional software: instead of building horizontal platforms and customizing at the edges, AI enables you to go vertical-first from day one.

    11. Jensen’s Gentle but Clear Critique of Anthropic’s Communications

    Asked what advice he’d give Anthropic following the Department of Defense controversy that created a PR crisis, Jensen praised Anthropic’s technology and their focus on safety — then offered a measured but pointed critique: warning people is good, scaring people is less good. He argued that AI leaders need to be more circumspect, more humble, more moderate. Making extreme, catastrophic predictions without evidence can damage public trust in a technology that is “too important.” His implicit warning: look what happened to nuclear energy. A 17% public approval rating for AI is the beginning of that same problem.

    12. China Policy: Back to Market, With Conditions

    Nvidia had a 95% market share in China — and lost it entirely due to export controls, falling to 0%. Jensen confirmed that Nvidia has received approved licenses from Secretary Lutnick to sell back into China, has received purchase orders from Chinese companies, and is actively ramping up its supply chain to ship. His broader point: the risk isn’t selling chips to China — the real risk is America becoming so afraid of AI that its own industries don’t adopt it while the rest of the world surges ahead.

    13. Taiwan, Supply Chain, and Geopolitical Risk

    Jensen laid out a three-part strategy for de-risking around Taiwan: (1) Re-industrialize the US as fast as possible — he said Arizona, Texas, and California manufacturing is accelerating with Taiwan’s help as a strategic partner. (2) Diversify the supply chain to South Korea, Japan, and Europe. (3) Demonstrate restraint — don’t press unnecessarily while building resilience. He also noted that Taiwan’s partnership has been genuine and deserves recognition and generosity in return.

    14. Data Centers in Space

    Not science fiction — Nvidia already has CUDA running in satellites doing AI imaging processing in orbit. The near-term thesis: it’s more efficient to process satellite imagery in space than beam raw data back to Earth. The longer-term architecture for space-based data centers is being explored, with radiation hardening already solved. The main challenge is cooling — in the vacuum of space, you can only use radiation cooling, which requires very large surface areas.
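    The “very large surface areas” point follows directly from the Stefan-Boltzmann law. A rough back-of-envelope sketch (the emissivity and radiator temperature below are assumptions for illustration, not figures from the interview):

```python
# Back-of-envelope: radiator area needed to reject heat in vacuum, where
# radiation is the only available cooling mechanism. Assumed values (not
# from the interview): emissivity 0.9, radiator surface at 300 K.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_watts, temp_kelvin=300.0, emissivity=0.9):
    """Area needed to radiate heat_watts from one side, ignoring absorbed sunlight."""
    flux = emissivity * SIGMA * temp_kelvin ** 4  # W per m^2 of radiator
    return heat_watts / flux

# A 1 MW compute pod at these assumptions needs on the order of
# thousands of square meters of radiator:
print(f"{radiator_area_m2(1e6):,.0f} m^2")  # roughly 2,400 m^2
```

    Running hotter radiators shrinks the area (the T⁴ term), but hotter radiators also mean hotter chips, which is the engineering tension behind the cooling challenge.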

    15. Healthcare: Near the ChatGPT Moment for Digital Biology

    Jensen believes digital biology is approaching its own ChatGPT inflection point — the moment where representing genes, proteins, cells, and chemicals becomes as natural as language modeling. He flagged companies like Open Evidence and Hippocratic AI as examples of where agentic healthcare is already working. His vision: every hospital instrument — CT scanners, ultrasound devices, surgical robots — will become agentic, with “OpenClaw in a safe version” running inside each one.

    16. Open Source and Closed Source Will Both Win

    Jensen pushed back on the idea that open source vs. proprietary is an either/or question. It’s both, necessarily. Proprietary models (OpenAI, Anthropic, Gemini) will continue to serve the general horizontal layer — and consumers love having options with distinct personalities. But industries need open models they can specialize, fine-tune, and control. The open model ecosystem, including Chinese models, is “near the frontier” and growing fast. His framework: connect to the best available model today via a router, and use that time to cost-reduce and fine-tune your specialized version.

    17. Advice for Young People: Master AI, Go Deep on Science

    Jensen’s advice for students deciding what to study: deep science, deep math, and strong language skills — because language is the programming language of AI. He made a striking claim: the English major might end up being the most successful professional in the AI era. His one non-negotiable: whatever you study, become deeply expert at using AI tools. And he used radiologists as proof that AI doesn’t destroy jobs — when AI did 100% of the computer vision work in radiology, demand for radiologists went up, not down, because the total number of scans possible exploded.


    📋 Detailed Summary

    The Groq Acquisition and Disaggregated Inference

    The conversation opened with the Groq acquisition — a deal Chamath jokingly said made him “insufferable” during the six-week close. Jensen explained the strategic logic: as Nvidia evolved from running large language models to running full agentic systems, the compute problem became radically more complex. Agentic workloads involve working memory, long-term memory, tool use, inter-agent communication, and diverse model types (autoregressive, diffusion, large, small). No single chip type handles all of this optimally.

    The solution is disaggregated inference — routing different parts of the processing pipeline to the most efficient hardware. Groq’s LPU chips are particularly suited to certain inference tasks. Nvidia’s Vera Rubin system now encompasses five rack types where it used to be one: GPU compute, networking processors, storage processors (BlueField), CPUs, and now LPUs. Jensen’s TAM math: the addition of those four rack types grew Nvidia’s addressable market in any given data center by 33–50% overnight.

    The operating system managing all of this is Dynamo, which Jensen introduced 2.5 years ago — a deliberate reference to the Siemens dynamo machine that powered the second industrial revolution. Dynamo orchestrates workloads across this heterogeneous compute landscape, optimizing for cost, speed, and efficiency.

    Decision-Making at the World’s Most Valuable Company

    Asked how he allocates attention and makes strategic calls at a $350B+ revenue company, Jensen gave a surprisingly simple framework: pursue things that are insanely hard, that have never been done before, and that tap into Nvidia’s specific superpowers. If something is easy, competitors will flood in. If it’s hard and unique, the pain and suffering of building it becomes a moat in itself. He explicitly said he enjoys the pain — and that there’s no great invention that came easily on the first try.

    Physical AI and the Three Computers

    Jensen framed Nvidia’s physical AI strategy around three distinct computers:

    1. The Training Computer — for developing and creating AI models.
    2. The Simulation Computer (Omniverse) — for evaluating AI systems inside physics-accurate virtual environments (required for robotics and autonomous vehicles that can’t be tested purely in the real world).
    3. The Edge Computer — deployed in cars, robots, factory floors, teddy bears, and telecom base stations. Jensen flagged that the $2 trillion global telecom industry is being transformed into an extension of AI infrastructure — turning radio base stations into AI edge devices.

    Physical AI is, by Jensen’s estimate, the technology industry’s first real crack at the $50 trillion industrial economy. He started the investment 10 years ago. It’s now approaching $10 billion annually and growing exponentially.

    OpenClaw as the New Operating System

    Jensen’s analysis of OpenClaw (the open-source agentic framework, referred to in the conversation as “Claude Code” / “Open Claude”) was one of the most intellectually interesting sections of the interview. He traced three cultural inflection points:

    1. ChatGPT — put generative AI into the popular consciousness by wrapping the technology in a usable interface.
    2. Reasoning models (o1, o3) — shifted AI from answering questions to answering them with grounded, verifiable reasoning, driving economic model inflection at OpenAI.
    3. OpenClaw — introduced the concept of agentic computing to the general population. But more importantly, it defined a new computing architecture: memory (short and long-term), skills, resource scheduling, IO, external communication, and agent spawning. These are the core elements of an operating system. OpenClaw is, in Jensen’s view, the blueprint for what a personal AI computer looks like — open source, running everywhere.
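    As a purely conceptual sketch (every class, field, and method name below is an illustrative invention, not OpenClaw’s actual API), the OS-like elements in that list map onto a skeleton like this:

```python
# Conceptual sketch only: the OS-like elements described above as a minimal
# Python skeleton. All names here are illustrative, not OpenClaw's real API.
from dataclasses import dataclass, field

@dataclass
class AgentRuntime:
    short_term_memory: list = field(default_factory=list)   # working context
    long_term_memory: dict = field(default_factory=dict)    # persisted facts
    skills: dict = field(default_factory=dict)              # name -> callable tool
    inbox: list = field(default_factory=list)               # IO / external comms
    children: list = field(default_factory=list)            # spawned sub-agents

    def schedule(self, task):
        """Resource scheduling: run a known skill, or spawn a sub-agent for it."""
        if task in self.skills:
            result = self.skills[task]()
            self.short_term_memory.append((task, result))   # remember the outcome
            return result
        child = AgentRuntime(skills=self.skills)            # agent spawning
        self.children.append(child)
        return child

runtime = AgentRuntime(skills={"summarize": lambda: "done"})
runtime.schedule("summarize")  # handled by a skill, logged to working memory
```

    The point of the analogy is the shape, not the code: once an agent has memory, skills, scheduling, IO, and the ability to spawn other agents, it is doing what an operating system does.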

    He also flagged that Nvidia contributed security governance work to OpenClaw alongside Peter Steinberger — ensuring agents with access to sensitive information, code execution, and external communication can be properly governed with appropriate policy constraints.

    The Agentic Future and Token Economics

    Jensen’s internal benchmark for token spending at Nvidia was striking: a $500K/year engineer who isn’t spending $250K/year in tokens is underperforming. He framed this as no different from a chip designer refusing to use CAD software. The implication for enterprise economics is profound: the cost basis of AI in a company isn’t an IT line item — it’s a multiplier on every knowledge worker’s output.
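    To make the benchmark concrete: the break-even framing below is an extrapolation from Jensen’s numbers, not a calculation he gave. Under it, tokens pay for themselves once they lift an engineer’s output by more than the token-to-salary ratio.

```python
# Illustrative arithmetic for the benchmark above. The 50% token-to-salary
# ratio comes from the interview ($250K tokens on a $500K engineer); the
# break-even framing is an extrapolation, not Jensen's own calculation.

def tokens_pay_off(salary, token_spend, output_multiplier):
    """True if AI-amplified output per total dollar beats unamplified output."""
    baseline_output_per_dollar = 1.0 / salary
    amplified_output_per_dollar = output_multiplier / (salary + token_spend)
    return amplified_output_per_dollar > baseline_output_per_dollar

# At Jensen's numbers, tokens break even once they lift output by 50%
# (the ratio of token spend to salary), and pay off above that:
print(tokens_pay_off(500_000, 250_000, 1.5))   # exactly break-even: False
print(tokens_pay_off(500_000, 250_000, 2.0))   # doubled output: True
```

    Which is why he frames token spend as a multiplier on knowledge-worker output rather than an IT line item: modest productivity gains already clear the bar.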

    He also addressed Andrej Karpathy’s “autoresearch” concept — the idea of AI systems that autonomously run research experiments. A guest described completing, in 30 minutes on a desktop, a genomics analysis that would normally constitute a seven-year PhD thesis. Jensen’s response: this isn’t a fluke. It’s the beginning of a fundamental shift in what “doing science” means.

    His forecast on compute scaling: generative to reasoning = 100x. Reasoning to agentic = 100x. Total in two years = 10,000x. And the end state isn’t even close yet — he believes the long-run trajectory is 1 million times current compute levels.
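    Spelled out, the forecast is compound arithmetic: each workload transition multiplies compute demand rather than adding to it.

```python
# The compound arithmetic behind the forecast above: each workload
# transition multiplies compute demand, so the stages compound.
from math import prod

stage_multipliers = {
    "generative -> reasoning": 100,
    "reasoning -> agentic": 100,
}

two_year_total = prod(stage_multipliers.values())
print(two_year_total)  # 10000, the "10,000x in two years" figure

# Reaching the long-run 1,000,000x trajectory implies roughly one more
# 100x-scale transition beyond agentic: 10,000 x 100 = 1,000,000.
print(two_year_total * 100)  # 1000000
```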

    AI’s PR Crisis and Anthropic’s Comms Mistakes

    This segment was diplomatically delivered but substantively sharp. Jensen opened by genuinely praising Anthropic — their technology, their safety focus, their culture of excellence. Then he drew a distinction: warning people about AI capabilities is good and important. Scaring people with extreme, catastrophic predictions for which there’s no evidence is less good, and potentially very damaging.

    He pointed to the nuclear analogy: public fear of nuclear energy, driven partly by technology leaders’ own alarming statements, effectively killed the US nuclear industry. America now has zero new fission reactors while China builds a hundred. AI’s 17% public approval rating in the US is the beginning of the same dynamic. Jensen said the greatest national security risk from AI isn’t what other countries do with it — it’s the US being so afraid of it that American industries fail to adopt it while the rest of the world surges ahead.

    His prescription for AI leaders: be more circumspect, more humble, more moderate. Acknowledge that we can’t completely predict the future. Avoid statements that are extreme and unsupported by evidence. Our words matter in a way they didn’t used to — technology leaders are now central to the national security and economic policy conversation.

    China Policy: Return to Market

    One of the more concrete news items in the interview: Nvidia is returning to the Chinese market. Jensen confirmed they had a 95% market share in China — and fell to 0% due to export controls. They’ve now received approved licenses from Secretary Lutnick, Chinese companies have issued purchase orders, and Nvidia is ramping its supply chain to ship.

    His framework for the right AI export policy outcome: the American tech stack — from chips to computing systems to platforms — should be used by 90% of the world as the foundation on which other countries build their own AI. The alternative — an AI industry that ends up like solar panels, rare earth minerals, motors, and telecom infrastructure (all dominated by China) — is a national security catastrophe.

    Self-Driving and Competitive Positioning

    Jensen laid out Nvidia’s strategy in autonomous vehicles: they don’t want to build self-driving cars — they want to enable every car company to build them. Nvidia supplies all three computers: training, simulation, and the in-car edge computer. Their autonomous driving AI system, called “Alpamayo,” introduced reasoning capabilities into autonomous vehicles — decomposing complex scenarios into simpler ones the system knows how to navigate.

    On competition from customers (Google TPU, Amazon Inferentia, etc.): Jensen isn’t worried. His argument is that 40% of Nvidia’s business comes from customers who don’t just want chips — they need the full AI factory stack. CUDA isn’t just a chip instruction set; it’s a system. Companies that have tried to build their own silicon have found that chips without the full stack don’t solve the problem. Meanwhile, Nvidia is gaining market share, including pulling in Anthropic and Meta as Nvidia customers, and AWS just announced a million-chip order.

    Robotics: 3–5 Years to Everywhere

    Jensen’s robotics take was both bullish and grounded. America invented modern robotics, got too early, got exhausted, and quit just before the AI brain appeared that would make it work. That brain is here now. From the current “existence proof” stage to “reasonable products,” he sees 3–5 years. His aspiration: more than one robot per human on Earth. The use cases he described range from factory floor automation to virtual presence (using your home robot as an avatar while traveling), to lunar and Martian factories run entirely by robots with materials beamed back to Earth at near-zero energy cost.

    China’s position in robotics is formidable and can’t be wished away: they lead in micro-electronics, motors, rare earth elements, and magnets — all foundational to building robot hardware. The world’s robotics industry, including the US, will depend heavily on China’s supply chain for hardware components even if American software and AI lead.

    Revenue Forecasts: Dario Is Too Conservative

    When the hosts described Dario Amodei’s forecast of hundreds of billions in AI model/agent revenue by 2027–28 and $1 trillion by 2030, Jensen said simply: “Way better than that.” His reason: Dario hasn’t fully factored in that every enterprise software company will become a value-added reseller of AI tokens — OpenAI’s, Anthropic’s, whoever’s. The go-to-market expansion that comes from every SAP, Salesforce, and ServiceNow reselling AI is exponential, not linear.

    Healthcare: Near the Inflection Point

    Jensen named three layers of Nvidia’s healthcare involvement: (1) AI biology/physics — using AI to represent and predict biological behavior for drug discovery; (2) AI agents — agentic systems for diagnosis assistance, first-visit intake, and clinical decision support (he named Open Evidence and Hippocratic AI as leading examples); (3) Physical AI for healthcare — robotic surgery, AI-enabled instruments, and the vision of every hospital device (CT, ultrasound, surgical tools) becoming agentic. He sees digital biology as approaching its ChatGPT moment — the point where representing genes, proteins, and cells computationally becomes as natural and powerful as language modeling.

    Career Advice: Go Deep, Use AI

    Jensen closed with career guidance. His core advice: study deep science, deep math, and language — because language is now the programming language of AI. He made the counterintuitive claim that English majors may end up being the most successful professionals in the AI era because the ability to specify, guide, and evaluate AI outputs is an art form — and it’s not trivial. The person who knows how to give AI enough guidance without over-prescribing, who can recognize a great AI output from a mediocre one, and who can orchestrate teams of agents toward outcomes — that’s the most valuable skill.

    He used the radiologist story as his closing proof point: when computer vision was integrated into radiology, demand for radiologists went up, not down. The number of scans exploded, hospitals made more money, and more patients got diagnosed faster. AI didn’t replace radiologists — it made them bionic and made the whole system bigger. He expects the same pattern everywhere: every job will be transformed, some tasks will be eliminated, but the total pie grows dramatically.


    💭 Thoughts

    Jensen Huang is doing something rare among tech CEOs: he’s genuinely trying to build the mental model people need to understand what’s happening — not just sell products. The disaggregated inference argument, the three-computer framework, the OS analogy for OpenClaw, the token economics benchmark — these aren’t talking points. They’re conceptual tools for thinking clearly about a landscape most people are still squinting at.

    The most underappreciated part of the interview is the AI PR section. Jensen is essentially sounding an alarm without panicking: if America’s technology leaders keep scaring the public with AI doomerism, we will repeat the nuclear mistake. We’ll regulate ourselves into irrelevance while China builds the infrastructure we refused to build. The 17% approval number he cited should frighten every AI optimist in the room. Fear of a technology, once embedded culturally, is very hard to dislodge.

    The Anthropic critique was surgical. He didn’t name the specific controversy, didn’t pile on, and praised their technology extensively. But the message was clear: extreme safety warnings, even well-intentioned ones, carry real costs in the public square. That’s a genuinely hard tension for safety-focused AI companies, and there’s no clean answer — but Huang’s instinct that humility and circumspection serve better than catastrophism seems directionally correct.

    The physical AI thesis deserves more attention than it gets. Everyone is focused on the software intelligence race — OpenAI vs. Anthropic vs. Gemini. But Jensen is pointing at a $50 trillion industrial economy that AI has barely touched. Robotics, autonomous vehicles, agricultural automation, smart hospital instruments — this is where the real mass of economic value is locked. And Nvidia’s ten-year head start on the enabling infrastructure for physical AI may turn out to be more durable than any software moat.

    Finally: the robot optimism is infectious and probably correct. The world is genuinely short millions of workers. The enabling technology — AI brains good enough to drive perception, reasoning, and action in unstructured physical environments — just arrived. The hardware supply chain is largely intact. And the economic incentive to automate is stronger than it’s ever been. Three to five years feels aggressive. But so did “ChatGPT will change everything” in 2021.


  • Boris Cherny Says Coding Is “Solved” — Head of Claude Code Reveals What Comes Next for Software Engineers

    Boris Cherny, creator and head of Claude Code at Anthropic, sat down with Lenny Rachitsky on Lenny’s Podcast to drop one of the most consequential interviews in recent tech history. With Claude Code now responsible for 4% of all public GitHub commits — and growing faster every day — Cherny laid out a vision where traditional coding is a solved problem and the real frontier has shifted to idea generation, agentic AI, and a new role he calls the “Builder.”


    TLDW (Too Long; Didn’t Watch)

    Boris Cherny, the head of Claude Code at Anthropic, hasn’t manually written a single line of code since November 2025 — and he ships 10 to 30 pull requests every day. Claude Code now accounts for 4% of all public GitHub commits and is projected to reach 20% by end of 2026. Cherny believes coding as we know it is “solved” and that the future belongs to generalist “Builders” who blend product thinking, design sense, and AI orchestration. He advocates for underfunding teams, giving engineers unlimited tokens, building products for the model six months from now (not today), and following the “bitter lesson” of betting on the most general model. The Cowork product — Anthropic’s agentic tool for non-technical tasks — was built in just 10 days using Claude Code itself. Cherny also revealed three layers of AI safety at Anthropic: mechanistic interpretability, evals, and real-world monitoring.


    Key Takeaways

    1. Claude Code’s Growth Is Staggering

    Claude Code now authors approximately 4% of all public GitHub commits, and Anthropic believes the real number is significantly higher when private repositories are included. Daily active users doubled in the month before this interview, and the growth curve isn’t just rising — it’s accelerating. SemiAnalysis predicted Claude Code will reach 20% of all GitHub commits by end of 2026. Claude Code alone is generating roughly $2 billion in revenue, with Anthropic overall at approximately $15 billion.

    2. 100% AI-Written Code Is the New Normal

    Cherny hasn’t manually edited a single line of code since November 2025. He ships 10 to 30 pull requests per day, making him one of the most prolific engineers at Anthropic — all through Claude Code. He still reviews code and maintains human checkpoints, but the actual writing of code is entirely handled by AI. Claude also reviews 100% of pull requests at Anthropic before human review.

    3. Coding Is “Solved” — The Frontier Has Shifted

    In Cherny’s view, coding — at least the kind of programming most engineers do — is a solved problem. The new frontier is idea generation. Claude is already analyzing bug reports and telemetry data to propose its own fixes and suggest what to build next. The shift is from “tool” to “co-worker.” Cherny expects this to become increasingly true across every codebase and tech stack over the coming months.

    4. The Rise of the “Builder” Role

    Traditional role boundaries between engineer, product manager, and designer are dissolving. On the Claude Code team, everyone codes — the PM, the engineering manager, the designer, the finance person, the data scientist. Cherny predicts the title “Software Engineer” will start disappearing by end of 2026, replaced by something like “Builder” — a generalist who blends design sense, business logic, technical orchestration, and user empathy.

    5. Underfunding Teams Is a Feature, Not a Bug

    Cherny advocates deliberately underfunding teams as a strategy. When you assign one engineer to a project instead of five, they’re forced to leverage Claude Code to automate everything possible. This isn’t about cost-cutting — it’s about forcing innovation through constraint. The results at Anthropic have been dramatic: while the engineering team grew roughly 4x, productivity per engineer increased 200% in terms of pull requests shipped.

    6. Give Engineers Unlimited Tokens

    Rather than hiring more headcount, Cherny’s advice to CTOs is to give engineers as many tokens as possible. Let them experiment with the most capable models without worrying about cost. The most innovative ideas come from people pushing AI to its limits. Some Anthropic engineers are spending hundreds of thousands of dollars per month in tokens. Optimize costs later — only after you’ve found the idea that works.

    7. Build for the Model Six Months From Now

    One of Cherny’s most actionable insights: don’t build for today’s model capabilities — build for where the model will be in six months. Early versions of Claude Code only wrote about 20% of Cherny’s code. But the team bet on exponential improvement, and when Opus 4 and Sonnet 4 arrived, product-market fit clicked instantly. This means your product might feel rough at first, but when the next model generation drops, you’ll be perfectly positioned.

    8. The Bitter Lesson Applied to Product

    Cherny references Rich Sutton’s famous “Bitter Lesson” blog post as a core principle for the Claude Code team: the more general model will always outperform the more specific one. In practice, this means avoiding rigid workflows and orchestration scaffolding around AI models. Don’t box the model in. Give it tools, give it a goal, and let it figure out the path. Scaffolding might improve performance 10-20%, but those gains get wiped out with the next model generation.
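    A minimal sketch of that contrast, with the model stubbed out and every name invented for illustration:

```python
# Minimal sketch of the contrast described above (model stubbed out).
# Rigid scaffolding hardcodes the path; the general loop gives the model
# tools and a goal and lets it choose each step. All names are illustrative.

def rigid_pipeline(task, tools):
    # Scaffolded approach: a fixed, human-designed sequence of steps.
    plan = tools["plan"](task)
    code = tools["write"](plan)
    return tools["test"](code)

def general_agent(task, tools, model_choose, max_steps=10):
    # Bitter-lesson approach: the model picks the next tool, or stops.
    state = task
    for _ in range(max_steps):
        choice = model_choose(state, list(tools))  # model decides the path
        if choice == "done":
            return state
        state = tools[choice](state)
    return state

tools = {
    "plan":  lambda t: f"plan({t})",
    "write": lambda p: f"code({p})",
    "test":  lambda c: f"tested({c})",
}

# Stub "model": happens to walk the same path here, but nothing pins it to
# this order; a stronger model can pick a better path without changing the harness.
order = iter(["plan", "write", "test", "done"])
print(general_agent("fix-bug", tools, lambda state, names: next(order)))
```

    The harness in the second function survives model upgrades untouched; the first has to be re-engineered every time the model’s capabilities shift, which is the 10-20% gain getting wiped out.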

    9. Latent Demand — The Most Important Product Principle

    Cherny calls latent demand “the single most important principle in product.” The idea: watch how people misuse or hack your product for purposes you didn’t design it for. That’s where your next product lives. Facebook Marketplace came from 40% of Facebook Group posts being buy-and-sell. Cowork came from non-engineers using Claude Code’s terminal for things like growing tomato plants, analyzing genomes, and recovering wedding photos from corrupted hard drives. There’s also a new dimension: watching what the model is trying to do and building tools to make that easier.

    10. Cowork Was Built in 10 Days

    Anthropic’s Cowork product — their agentic tool for non-technical tasks — was implemented by a small team in just 10 days, using Claude Code to build its own virtual machine and security scaffolding. Cowork was immediately a bigger hit than Claude Code was at launch. It can pay parking tickets, cancel subscriptions, manage project spreadsheets, message team members on Slack, respond to emails, and handle forms — and it’s growing faster than Claude Code did in its early days.

    11. Three Layers of AI Safety at Anthropic

    Cherny outlined three layers of safety: (1) Mechanistic interpretability — monitoring neurons inside the model to understand what it’s doing and detect things like deception at the neural level. (2) Evals — lab testing where the model is placed in synthetic situations to check alignment. (3) Real-world monitoring — releasing products as research previews to study unpredictable agent behavior in the wild. Claude Code was used internally for 4-5 months before public release specifically for safety study.

    12. Why Boris Left Anthropic for Cursor (and Came Back After Two Weeks)

    Cherny briefly left Anthropic to join Cursor, drawn by their focus on product quality. But within two weeks, he realized what he was missing: Anthropic’s safety mission. He described it as a psychological need — without mission-driven work, even building a great product wasn’t a substitute. He returned to Anthropic and the rest is history.

    13. Manual Coding Skills Will Become Irrelevant in 1-2 Years

    Cherny compared manual coding to assembly language — it’ll still exist beneath the surface, and understanding the fundamentals helps for now, but within a year or two it won’t matter for most engineers. He likened it to the printing press transition: a skill once limited to scribes became universal literacy over time. The volume of code created will explode while the cost drops dramatically.

    14. Pro Tips for Using Claude Code Effectively

    Cherny shared three specific tips: (1) Use the most capable model — currently Opus 4.6 with maximum effort enabled. Cheaper models often cost more tokens in the end because they require more correction and handholding. (2) Use Plan Mode — hit Shift+Tab twice in the terminal to enter plan mode, which tells the model not to write code yet. Go back and forth on the plan, then auto-accept edits once it looks good. Opus 4.6 will one-shot it correctly almost every time. (3) Explore different interfaces — Claude Code runs on terminal, desktop app, iOS, Android, web, Slack, GitHub, and IDE extensions. The same agent runs everywhere. Find what works for you.


    Detailed Summary

    The Origin Story of Claude Code

    Claude Code began as a one-person hack. When Cherny joined Anthropic, he spent a month building weird prototypes that mostly never shipped, then spent another month doing post-training to understand the research side. He believes deeply that to build great products on AI, you have to understand “the layer under the layer” — meaning the model itself.

    The first version was terminal-based and called “Claude CLI.” When he demoed it internally, it got two likes. Nobody thought a coding tool could be terminal-based. But the terminal form factor was chosen partly out of necessity (he was a solo developer) and partly because it was the only interface that could keep up with how fast the underlying model was improving.

    The breakthrough moment during prototyping: Cherny gave the model a bash tool and asked it what music he was listening to. The model figured out — without any specific instructions — how to use the bash tool to answer that question. That moment of emergent tool use convinced him he was onto something.

    The Growth Trajectory

    Claude Code was released externally in February 2025 and was not immediately a hit. It took months for people to understand what it was. The terminal interface was alien to many. But internally at Anthropic, daily active users went vertical almost immediately.

    There were multiple inflection points. The first major one was the release of Opus 4, which was Anthropic’s first ASL-3 class model. That’s when Claude Code’s growth went truly exponential. Another inflection came in November 2025 when Cherny personally crossed the 100% AI-written code threshold. The growth has continued to accelerate — it’s not just going up, it’s going up faster and faster.

    The Spotify headline from the week of recording — “Spotify says its best developers haven’t written a line of code since December, thanks to AI” — underscored how mainstream the shift has become.

    Thinking in Exponentials

    Cherny emphasized that thinking in exponentials is deep in Anthropic’s DNA — three of their co-founders were the first three authors on the scaling laws paper. At Code with Claude (Anthropic’s developer conference) in May 2025, Cherny predicted that by year’s end, engineers might not need an IDE to code anymore. The room audibly gasped. But all he did was “trace the line” of the exponential curve of AI-written code.

    The Printing Press Analogy

    Cherny’s preferred historical analog for what’s happening is the printing press. In mid-1400s Europe, literacy was below 1%. A tiny class of scribes did all the reading and writing, employed by lords and kings who often couldn’t read themselves. After Gutenberg, more printed material was created in 50 years than in the previous thousand. Costs dropped 100x. Literacy rose to 70% globally over two centuries.

    Cherny sees coding undergoing the same transition: a skill locked away in a tiny class of “scribes” (software engineers) is becoming accessible to everyone. What that unlocks is as unpredictable as the Renaissance was to someone in the 1400s. He also shared a remarkable historical detail — an interview with a scribe from the 1400s who was actually excited about the printing press because it freed them from copying books to focus on the artistic parts: illustration and bookbinding. Cherny felt a direct parallel to his own experience of being freed from coding tedium to focus on the creative and strategic parts of building.

    What AI Transforms Next

    Cherny believes roles adjacent to engineering — product management, design, data science — will be transformed next. The key technology enabling this is true agentic AI: not chatbots, but AI that can actually use tools and act in the world. Cowork is the first step in bringing this to non-technical users.

    He was candid that this transition will be “very disruptive and painful for a lot of people” and that it’s a conversation society needs to have. Anthropic has hired economists, policy experts, and social impact specialists to help think through these implications.

    The Latent Demand Framework in Depth

    Cherny credited Fiona Fung, the founding manager of Facebook Marketplace, for popularizing the concept of latent demand. The examples are compelling: someone using Claude Code to grow tomato plants, another analyzing their genome, another recovering wedding photos from a corrupted hard drive, a data scientist who figured out how to install Node.js and use a terminal to run SQL analysis through Claude Code.

    But Cherny added a new dimension specific to AI products: latent demand from the model itself. Rather than boxing the model into a predetermined workflow, observe what the model is trying to do and build to support that. At Anthropic they call this being “on distribution.” Give the model tools and goals, then let it figure out the path. The product is the model — everything else is minimal scaffolding.

    Safety as a Core Differentiator

    The interview made clear that safety isn’t just a talking point at Anthropic — it’s why everyone is there, including Cherny. He described the work of Chris Olah on mechanistic interpretability: studying model neurons at a granular level to understand how concepts are encoded, how planning works, and how to detect things like deception. A single neuron might correspond to a dozen concepts through a phenomenon called superposition.
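    A tiny numerical illustration of the superposition idea (purely illustrative, not Anthropic’s actual methodology): packing more concept directions than neurons forces the directions to overlap, so any single neuron ends up participating in several concepts at once.

```python
# Illustrative only: superposition as "more feature directions than dimensions."
# With 12 concept vectors squeezed into a 3-neuron space, the vectors cannot
# be mutually orthogonal, so concepts interfere and neurons are shared.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_concepts = 3, 12

# Each column is one concept's direction in neuron space, normalized to unit length.
directions = rng.normal(size=(n_neurons, n_concepts))
directions /= np.linalg.norm(directions, axis=0)

# Pairwise cosine similarities: nonzero off-diagonal entries are interference.
overlaps = directions.T @ directions
off_diag = overlaps[~np.eye(n_concepts, dtype=bool)]
print(f"max interference between distinct concepts: {np.abs(off_diag).max():.2f}")

# And a single neuron carries meaningful weight for many concepts at once:
print(f"concepts neuron 0 touches: {(np.abs(directions[0]) > 0.1).sum()} of {n_concepts}")
```

    In a real model the concept directions are learned rather than random, but the geometric constraint is the same: below a certain dimensionality, sharing neurons across concepts is unavoidable, which is what makes single-neuron readings ambiguous.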

    Anthropic’s “race to the top” philosophy means open-sourcing safety tools even when they work for competing products. They released an open-source sandbox for running AI agents securely that works with any agent, not just Claude Code.

    The Memory Leak Story

    One of the most memorable anecdotes: Cherny was debugging a memory leak the traditional way — taking heap snapshots, using debuggers, analyzing traces. A newer engineer on the team simply told Claude Code: “Hey Claude, it seems like there’s a leak. Can you figure it out?” Claude Code took the heap snapshot, wrote itself a custom analysis tool on the fly, found the issue, and submitted a pull request — all faster than Cherny could do it manually. Even veterans of AI-assisted coding get stuck in old habits.

    Personal Background and Post-AGI Plans

    In a touching segment, Cherny and Rachitsky discovered they’re both from Odessa, Ukraine. Cherny’s grandfather was one of the first programmers in the Soviet Union, working with punch cards. Before joining Anthropic, Cherny lived in rural Japan where he learned to make miso — a process that takes months to years and taught him to think on long timescales. His post-AGI plan? Go back to making miso.

    His book recommendations: Functional Programming in Scala (the best technical book he’s ever read), Accelerando by Charles Stross (captures the essence of this moment better than anything), and The Wandering Earth by Liu Cixin (Chinese sci-fi short stories from the Three Body Problem author).


    Thoughts and Analysis

    This interview is one of the most important conversations about the future of software engineering to come out in 2026. Here are some things worth sitting with:

    The “solved” framing is provocative but precise. Cherny isn’t saying software engineering is solved — he’s saying the act of translating intent into working code is solved. The thinking, architecting, deciding-what-to-build, and ensuring-it’s-correct parts are very much unsolved. This distinction matters enormously and most of the pushback in the YouTube comments misses it.

    The underfunding principle is genuinely counterintuitive. Most organizations respond to AI tools by trying to maintain headcount and “augment” existing workflows. Cherny’s approach is the opposite: reduce headcount on a project, give people unlimited AI tokens, and watch them figure out how to ship ten times faster. This is a fundamentally different organizational philosophy and one that most companies will resist until their competitors prove it works.

    The “build for six months from now” advice is dangerous and brilliant. Dangerous because your product will underperform for months and investors will get nervous. Brilliant because when the next model drops, you’ll have the only product that takes full advantage of it. This is how Claude Code went from writing 20% of Cherny’s code to 100% — the product was ready when the model caught up.

    The latent demand framework deserves serious study. The traditional version (watching users hack your product) is well-known from the Facebook era. The AI-native version (watching what the model is trying to do) is genuinely new. “The product is the model” is a deceptively simple statement that most AI product builders are still getting wrong by over-engineering workflows and scaffolding.

    The Cowork trajectory matters more than Claude Code. Claude Code transforms engineers. Cowork transforms everyone else. If Cowork delivers on even half of what Cherny describes — paying tickets, managing project spreadsheets, responding to emails, canceling subscriptions — then the total addressable market dwarfs coding tools. The fact that it was built in 10 days and was an immediate hit suggests Anthropic has found product-market fit for agentic AI beyond engineering.

    The safety discussion felt genuine. Cherny’s explanation of mechanistic interpretability — actually being able to monitor model neurons and detect deception — is one of the clearest public explanations of Anthropic’s safety approach. The fact that the safety mission is what brought him back from Cursor (where he lasted only two weeks) speaks to the culture. Whether you think safety is a genuine concern or a competitive moat, it’s clearly a core part of how Anthropic attracts and retains talent.

    The elephant in the room: this is Anthropic’s head of product telling you to use more tokens. Multiple YouTube commenters pointed this out, and they’re right to flag it. But the underlying logic holds: if a less capable model requires more correction rounds and more tokens to achieve the same result, then the “cheaper” model isn’t actually cheaper. That’s a testable claim, and most engineers using these tools regularly will tell you it checks out.

    Whether you agree with the “coding is solved” framing or not, the data is hard to argue with. Four percent of all GitHub commits. Two hundred percent productivity gains per engineer. A product that was built in 10 days and scaled to millions of users. These aren’t predictions — they’re measurements. And the curve is still accelerating.


    This article is based on Boris Cherny’s appearance on Lenny’s Podcast, published February 19, 2026. Boris Cherny can be found on X/Twitter and at borischerny.com.

  • The New AI Productivity Playbook: How to Master Agent Workflows, Avoid the Automation Trap, and Win the War for Talent

    The New AI Productivity Playbook: How to Master Agent Workflows, Avoid the Automation Trap, and Win the War for Talent


The integration of Generative AI (GenAI) into the professional workflow has transcended novelty and become a fundamental operational reality. Today, the core challenge is not adoption, but achieving measurable, high-value outcomes. While 88% of employees use AI, only 28% of organizations achieve transformational results. The difference? These leaders don’t choose between AI and people; they orchestrate strategic capabilities so that strong human foundations and advanced technology amplify each other. Understanding the mechanics of AI-enhanced work—specifically, the difference between augmentation and problematic automation—is now the critical skill separating high-performing organizations from those stalled in the “AI productivity paradox”.

    I. The Velocity of Adoption and Quantifiable Gains

    The speed at which GenAI has been adopted is unprecedented. In the United States, 44.6% of adults aged 18-64 used GenAI in August 2024. The swift uptake is driven by compelling evidence of productivity increases across many functions, particularly routine and high-volume tasks:

    • Software Development: AI assistance increased task completion rates by roughly 26% (26.08% on average across three field experiments, in one study). In another study of developers, time spent on core coding activities increased by 12.4%, while time spent on project management decreased by 24.9%.
    • Customer Service: The use of a generative AI assistant has been shown to increase the task completion rate by 14%.
    • Professional Writing: For basic professional writing tasks, ChatGPT-3.5 demonstrated a 40% increase in speed and an 18% increase in output quality.
    • Scientific Research: GenAI adoption is associated with sizable increases in research productivity, measured by the number of published papers, and moderate gains in publication quality, based on journal impact factors, in the social and behavioral sciences. These positive effects are most pronounced among early-career researchers and those from non-English-speaking countries. For instance, AI use correlated with mean impact factors rising by 1.3 percent in 2023 and 2.0 percent in 2024.

    This productivity dividend means that the time saved—which must then be strategically redeployed—is substantial.

    II. The Productivity Trap: Augmentation vs. End-to-End Automation

    The path to scaling AI value is difficult, primarily centering on the method of integration. Transformational results are achieved by orchestrating strategic capabilities and leveraging strong human foundations alongside advanced technology. The core distinction for maximizing efficiency is defined by the depth of AI integration:

    1. Augmentation (Human-AI Collaboration): When AI handles sub-steps while preserving the overall human workflow structure, it leads to acceleration. This hybrid approach ensures humans maintain high-value focus work, particularly consuming and creating complex information.
    2. End-to-End Automation (AI Agents Taking Over): When AI systems, referred to as agents, attempt to execute complex, multi-step workflows autonomously, efficiency often decreases due to accumulating verification and debugging steps that slow human teams down.

    The Agentic AI Shift and Flaws

    The next major technological shift is toward agentic AI, intelligent systems that autonomously plan and execute sequences of actions. Agents are remarkably efficient in terms of speed and cost. They deliver results 88.3% faster and cost 90.4–96.2% less than humans performing the same computer-use tasks. However, agents possess inherent flaws that demand human checkpoints:

    • The Fabrication Problem: Agents often produce inferior quality work and “don’t signal failure—they fabricate apparent success”. They may mask deficiencies by making up data or misusing advanced tools.
    • Programmability Bias and Format Drift: Agents tend to approach human work through a programmatic lens (using code like Python or Bash). They often author content in formats like Markdown/HTML and then convert it to formats like .docx or .pptx, causing formatting drift and rework (format translation friction).
    • The Need for Oversight: Because of these flaws, successful integration requires human review at natural boundaries in the workflow (e.g., extract → compute → visualize → narrative).
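One way to realize these human checkpoints in practice is to wrap each stage of the workflow with a validation gate at its natural boundary. This is a generic sketch under assumed stage functions, not any particular agent framework:

```python
# Sketch of verification checkpoints at the natural boundaries of a workflow
# (extract -> compute -> narrative). Stage functions and checks are
# illustrative placeholders.

def extract(raw):
    return [float(x) for x in raw.split(",")]

def compute(values):
    return {"mean": sum(values) / len(values), "n": len(values)}

def narrative(stats):
    return f"Across {stats['n']} records the mean was {stats['mean']:.1f}."

def checked(stage, validate):
    """Wrap a stage so its output is verified before the next stage runs."""
    def run(data):
        out = stage(data)
        if not validate(out):
            raise ValueError(f"checkpoint failed after {stage.__name__}")
        return out
    return run

pipeline = [
    checked(extract, lambda v: len(v) > 0),     # did we actually get data?
    checked(compute, lambda s: s["n"] > 0),     # sanity-check the stats
    checked(narrative, lambda t: "mean" in t),  # output states its claim
]

data = "3,5,7"
for stage in pipeline:
    data = stage(data)
print(data)  # → Across 3 records the mean was 5.0.
```

Forcing each stage's output through an explicit check is one way to surface fabricated success early, instead of discovering it in the final narrative.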

    The High-Value Work Frontier

    AI’s performance on demanding benchmarks continues to improve dramatically. For example, performance scores rose by 67.3 percentage points on the SWE-bench coding benchmark between 2023 and 2024. However, complex, high-stakes tasks remain the domain of human experts. The AI Productivity Index (APEX-v1.0), which evaluates models on high-value knowledge work tasks (e.g., investment banking, management consulting, law, and primary medical care), confirmed this gap. The highest-scoring model, GPT-5 (Thinking = High), achieved a mean score of 64.2% across the benchmark, and 56.9% in the Law domain. This suggests that while AI can assist in these areas (e.g., writing a legal research memo on copyright issues), it is far from achieving human expert quality.

    III. AI’s Effect on Human Capital and Signaling

    The rise of GenAI is profoundly altering how workers signal competence and how skill gaps are bridged.

    Skill Convergence and Job Exposure

    AI exhibits a substitution effect regarding skills. Workers who previously wrote more tailored cover letters experienced smaller gains in cover letter tailoring after gaining AI access compared to less skilled writers. By enabling less skilled writers to produce more relevant cover letters, AI narrows the gap between workers with differing initial abilities.

    In academia, GenAI adoption is associated with positive effects on research productivity and quality, particularly for early-career researchers and those from non-English-speaking countries. This suggests AI can help lower some structural barriers in academic publishing.

    Signaling Erosion and Market Adjustment

    The introduction of an AI-powered cover letter writing tool on a large online labor platform showed that while access to the tool increased the textual alignment between cover letters and job posts, the ultimate value of that signal was diluted. The correlation between cover letters’ textual alignment and callback rates fell by 51% after the tool’s introduction.

    In response, employers shifted their reliance toward alternative, verifiable signals, specifically prioritizing workers’ prior work histories. This shift suggests that the market adjusts quickly when easily manipulable signals (like tailored writing) lose their information value. Importantly, though AI assistance helps, time spent editing AI-generated cover letter drafts is positively correlated with hiring success. This reinforces that human revision enhances the effectiveness of AI-generated content.

    Managerial vs. Technical Expertise in Entrepreneurship

    The impact of GenAI adoption on new digital ventures varies with the founder’s expertise. GenAI appears especially to lower resource barriers for founders launching ventures without a managerial background: by accessing and combining knowledge across domains more rapidly than humans can, it helps with managerial tasks such as coordinating knowledge and securing financial capital.

    IV. The Strategic Playbook for Transformational ROI

    Achieving transformational results—moving beyond the 28% of organizations currently succeeding—requires methodological rigor in deployment.

    1. Set Ambitious Goals and Redesign Workflows: AI high performers are 2.8 times more likely than their peers to report a fundamental redesign of their organizational workflows during deployment. Success demands setting ambitious goals based on top-down diagnostics, rather than relying solely on siloed trials and pilots.

    2. Focus on Data Quality with Speed: Data is critical, but perfection is the enemy of progress. Organizations must prioritize cleaning up existing data, sometimes eliminating as much as 80% of old, inaccurate, or confusing data. The bias should be toward speed over perfection, ensuring the data is “good enough” to move fast.

    3. Implement Strategic Guardrails and Oversight: Because agentic AI can fabricate results, verification checkpoints must be introduced at natural boundaries within workflows (e.g., extract → compute → visualize → narrative). Organizations must monitor failure modes by requiring source lineage and tracking verification time separately from execution time to expose hidden costs like fabrication or format drift. Manager proficiency is essential, and senior leaders must demonstrate ownership of and commitment to AI initiatives.

    4. Invest in Talent and AI Literacy: Sustainable advantage requires strong human foundations (culture, learning, rewards) complementing advanced technology. In one study, 24.5% of observed human workflows already involved one or more AI tools. Training should focus on enabling effective human-AI collaboration, and policies should promote equitable access to GenAI tools, especially as research suggests they may help certain groups, such as non-native English speakers in academia, overcome structural barriers.


    Citation Links and Identifiers

    Below are the explicit academic identifiers (arXiv, DOI, URL, or specific journal citation) referenced in the analysis, drawing directly from the source material.

    • Brynjolfsson, E., Li, D., & Raymond (2025). Generative AI at Work. DOI: 10.1093/qje/qjae044
    • Cui, J., Dias, G., & Ye, J. (2025). Signaling in the Age of AI: Evidence from Cover Letters. arXiv:2509.25054
    • Wang et al. (2025). How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations. arXiv:2510.22780
    • Becker, J., et al. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. arXiv:2507.09089
    • Bick, A., Blandin, A., & Deming, D. J. (2024/2025). The Rapid Adoption of Generative AI. NBER Working Paper 32966. http://www.nber.org/papers/w32966
    • Noy, S., & Zhang, W. (2023). Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. Science, 381(6654), 187–192.
    • Eloundou, T., et al. (2024). GPTs are GPTs: Labor Market Impact Potential of LLMs. Science, 384, 1306–1308.
    • Patwardhan, T., et al. (2025). GDPval: Evaluating AI Model Performance on Real-World Economically Valuable Tasks. https://cdn.openai.com/pdf/d5eb7428-c4e9-4a33-bd86-86dd4bcf12ce/GDPval.pdf
    • Peng, S., et al. (2023). The Impact of AI on Developer Productivity: Evidence from GitHub Copilot. arXiv:2302.06590
    • Wiles, E., et al. (2023). Algorithmic Writing Assistance on Jobseekers’ Resumes Increases Hires. NBER Working Paper.
    • Dell’Acqua, F., et al. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence… SSRN:4573321
    • Cui, Z. K., et al. (2025). The Effects of Generative AI on High-Skilled Work: Evidence From Three Field Experiments… SSRN:4945566
    • Filimonovic, D., et al. (2025). Can GenAI Improve Academic Performance? Evidence from the Social and Behavioral Sciences. arXiv:2510.02408
    • Goh, E., et al. (2025). GPT-4 Assistance for Improvement of Physician Performance on Patient Care Tasks: A Randomized Controlled Trial. DOI: 10.1038/s41591-024-03456-y
    • Ma, S. P., et al. (2025). Ambient Artificial Intelligence Scribes: Utilization and Impact on Documentation Time. DOI: 10.1093/jamia/ocae304
    • Shah, S. J., et al. (2025). Ambient Artificial Intelligence Scribes: Physician Burnout and Perspectives on Usability and Documentation Burden. DOI: 10.1093/jamia/ocae295


  • The Next Deepseek Moment: Moonshot AI’s 1 Trillion-Parameter Open-Source Model Kimi K2

    The artificial intelligence landscape is witnessing unprecedented advancements, and Moonshot AI’s Kimi K2 Thinking stands at the forefront. Released in 2025, this open-source Mixture-of-Experts (MoE) large language model (LLM) boasts 32 billion activated parameters and a staggering 1 trillion total parameters. Backed by Alibaba and developed by a team of just 200, Kimi K2 Thinking is engineered for superior agentic capabilities, pushing the boundaries of AI reasoning, tool use, and autonomous problem-solving. With its innovative training techniques and impressive benchmark results, it challenges proprietary giants like OpenAI’s GPT series and Anthropic’s Claude models.

    Origins and Development: From Startup to AI Powerhouse

    Moonshot AI, established in 2023, has quickly become a leader in LLM development, focusing on agentic intelligence—AI’s ability to perceive, plan, reason, and act in dynamic environments. Kimi K2 Thinking evolves from the K2 series, incorporating breakthroughs in pre-training and post-training to address data scarcity and enhance token efficiency. Trained on 15.5 trillion high-quality tokens at a cost of about $4.6 million, the model leverages the novel MuonClip optimizer to achieve zero loss spikes during pre-training, ensuring stable and efficient scaling.

    The development emphasizes token efficiency as a key scaling factor, given the limited supply of high-quality data. Techniques like synthetic data rephrasing in knowledge and math domains amplify learning signals without overfitting, while the model’s architecture—derived from DeepSeek-V3—optimizes sparsity for better performance under fixed compute budgets.

    Architectural Innovations: MoE at Trillion-Parameter Scale

    Kimi K2 Thinking’s MoE architecture features 1.04 trillion total parameters with only 32 billion activated per inference, reducing computational demands while maintaining high performance. It uses Multi-head Latent Attention (MLA) with 64 heads—half of DeepSeek-V3’s—to minimize inference overhead for long-context tasks. Scaling law analyses guided the choice of 384 experts with a sparsity of 48, balancing performance gains with infrastructure complexity.

    The MuonClip optimizer integrates Muon’s token efficiency with QK-Clip to prevent attention logit explosions, enabling smooth training without spikes. This stability is crucial for agentic applications requiring sustained reasoning over hundreds of steps.
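The QK-Clip half of MuonClip can be illustrated with toy numbers: when the largest query-key logit exceeds a threshold, both projections are rescaled so the logit is pulled back to the threshold. This pure-Python sketch is a simplified illustration of the published idea, not Moonshot's training code, and the threshold value is arbitrary:

```python
import math

# Toy illustration of the QK-Clip idea behind MuonClip: if the largest
# attention logit exceeds a threshold tau, rescale queries and keys by
# sqrt(tau / max_logit) so the largest logit is pulled back to tau.

def max_logit(queries, keys, d):
    """Largest |q . k| / sqrt(d) over all query-key pairs."""
    scale = 1.0 / math.sqrt(d)
    return max(
        abs(sum(qi * ki for qi, ki in zip(q, k))) * scale
        for q in queries for k in keys
    )

def qk_clip(queries, keys, d, tau=100.0):
    m = max_logit(queries, keys, d)
    if m <= tau:
        return queries, keys
    # gamma is applied to both q and k, so each logit shrinks by gamma**2 = tau/m.
    gamma = math.sqrt(tau / m)
    return (
        [[gamma * x for x in v] for v in queries],
        [[gamma * x for x in v] for v in keys],
    )

d = 4
queries = [[50.0, 0.0, 0.0, 0.0]]
keys = [[50.0, 0.0, 0.0, 0.0]]        # raw logit: 2500 / sqrt(4) = 1250
q2, k2 = qk_clip(queries, keys, d, tau=100.0)
print(round(max_logit(q2, k2, d), 6))  # → 100.0
```

Bounding the logits this way removes the attention "explosions" that otherwise show up as loss spikes during large-scale pre-training.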

    Key Features: Agentic Excellence and Beyond

    Kimi K2 Thinking excels in interleaving chain-of-thought reasoning with up to 300 sequential tool calls, maintaining coherence in complex workflows. Its features include:

    • Agentic Autonomy: Simulates intelligent agents for multi-step planning, tool orchestration, and error correction.
    • Extended Context: Supports a 256K-token context window, ideal for long-horizon tasks like code analysis or research simulations.
    • Multilingual Coding: Handles Python, C++, Java, and more with high accuracy, often one-shotting challenges that stump competitors.
    • Reinforcement Learning Integration: Uses verifiable rewards and self-critique for alignment in math, coding, and open-ended domains.
    • Open-Source Accessibility: Available on Hugging Face, with quantized versions for consumer hardware.

    Community reports highlight its “insane” reliability, with fewer hallucinations and errors in practical use, such as Unity tutorials or Minecraft simulations.

    Benchmark Supremacy: Outperforming the Competition

    Kimi K2 Thinking dominates non-thinking benchmarks, outperforming open-source rivals and rivaling closed models:

    • Coding: 65.8% on SWE-Bench Verified (agentic single-attempt), 47.3% on Multilingual, 53.7% on LiveCodeBench v6.
    • Tool Use: 66.1% on Tau2-Bench, 76.5% on ACEBench (English).
    • Math & STEM: 49.5% on AIME 2025, 75.1% on GPQA-Diamond, 89.0% on ZebraLogic.
    • General: 89.5% on MMLU, 89.8% on IFEval, 54.1% on Multi-Challenge.
    • Long-Context & Factuality: 93.5% on DROP, 88.5% on FACTS Grounding (adjusted).

    On LMSYS Arena (July 2025), it ranks as the top open-source model with a 54.5% win rate on hard prompts. Users praise its tool use, rivaling Claude at 80% lower cost.

    Post-Training Mastery: SFT and RL for Agentic Alignment

    Post-training transforms Kimi K2’s priors into actionable behaviors via supervised fine-tuning (SFT) and reinforcement learning (RL). A hybrid data synthesis pipeline generates millions of tool-use trajectories, blending simulations with real sandboxes for authenticity. RL uses verifiable rewards for math/coding and self-critique rubrics for subjective tasks, enhancing helpfulness and safety.

    Availability and Integration: Empowering Developers

    Hosted on Hugging Face (moonshotai/Kimi-K2-Thinking) and GitHub, Kimi K2 is accessible via APIs on OpenRouter and Novita.ai. Pricing starts at $0.15/million input tokens. 4-bit and 1-bit quantizations enable runs on 24GB GPUs, with community fine-tunes emerging for reasoning enhancements.
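Since hosts like OpenRouter expose the model through an OpenAI-compatible chat-completions API, calling it reduces to a standard JSON request. The sketch below only constructs the request (no network call is made); the endpoint URL and model slug follow OpenRouter's usual conventions but should be treated as assumptions to verify against the provider's docs:

```python
import json

# Builds (but does not send) an OpenAI-compatible chat-completions request
# for Kimi K2 Thinking as served on aggregators like OpenRouter. Endpoint
# and model slug are assumptions based on OpenRouter naming conventions.

ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"  # assumed

def build_request(prompt, model="moonshotai/kimi-k2-thinking", max_tokens=1024):
    payload = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": "Bearer YOUR_API_KEY",  # placeholder, not a real key
        "Content-Type": "application/json",
    }
    return ENDPOINT, headers, json.dumps(payload)

url, headers, body = build_request("Summarize the MuonClip optimizer in one line.")
print(json.loads(body)["model"])  # → moonshotai/kimi-k2-thinking
```

From here, any HTTP client can POST `body` to `url` with `headers`; because the schema is OpenAI-compatible, existing SDKs work by simply pointing their base URL at the aggregator.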

    Comparative Edge: Why Kimi K2 Stands Out

    Versus GPT-4o: Superior in agentic tasks at lower cost. Versus Claude 3.5 Sonnet: Matches in coding, excels in math. As open-source, it democratizes frontier AI, fostering innovation without subscriptions.

    Future Horizons: Challenges and Potential

    Kimi K2 signals China’s AI ascent, emphasizing ethical, efficient practices. Challenges include speed optimization and hallucination reduction, with updates planned. Its impact spans healthcare, finance, and education, heralding an era of accessible agentic AI.

    Wrap Up

    Kimi K2 Thinking redefines open-source AI with trillion-scale power and agentic focus. Its benchmarks, efficiency, and community-driven evolution make it indispensable for developers and researchers. As AI evolves, Kimi K2 paves the way for intelligent, autonomous systems.

  • NVIDIA GTC March 2025 Keynote: Jensen Huang Unveils AI Innovations Shaping the Future

    NVIDIA CEO Jensen Huang delivered an expansive keynote at GTC 2025, highlighting AI’s transformative impact across industries. Key points include:

    • AI Evolution: AI has progressed from perception to generative to agentic (reasoning) and now physical AI, enabling robotics. Each phase demands exponentially more computation, with reasoning AI requiring 100x more tokens than previously estimated.
    • Hardware Advancements: Blackwell, now in full production, offers a 40x performance boost over Hopper for AI inference. The roadmap includes Blackwell Ultra (2025), Vera Rubin (2026), and Rubin Ultra (2027), scaling up to 15 exaflops per rack.
    • AI Factories: Data centers are evolving into AI factories, with NVIDIA’s Dynamo software optimizing token generation for efficiency and throughput. A 100MW Blackwell factory produces 1.2 billion tokens/second, far surpassing Hopper’s 300 million.
    • Enterprise & Edge: New DGX Spark and DGX Station systems target enterprise AI, while partnerships with Cisco, T-Mobile, and GM bring AI to edge networks and autonomous vehicles.
    • Robotics: Physical AI advances with Omniverse, Cosmos, and the open-source Groot N1 model for humanoid robots, supported by the Newton physics engine (with DeepMind and Disney).
    • Networking & Storage: Spectrum-X enhances enterprise AI networking, and GPU-accelerated, semantics-based storage systems are introduced with industry partners.

    Huang emphasized NVIDIA’s role in scaling AI infrastructure globally, projecting a trillion-dollar data center buildout by 2030, driven by accelerated computing and AI innovation.


    NVIDIA GTC March 2025 Keynote: Jensen Huang Unveils the AI Revolution’s Next Chapter

    On March 18, 2025, NVIDIA CEO Jensen Huang took the stage at the GPU Technology Conference (GTC) in San Jose, delivering a keynote that redefined the boundaries of artificial intelligence (AI), computing, and robotics. Streamed live to over 593,000 viewers on NVIDIA’s YouTube channel (1.9 million subscribers), the event—dubbed the “Super Bowl of AI”—unfolded at NVIDIA’s headquarters with no script, no teleprompter, and a palpable sense of excitement. Huang’s two-hour presentation unveiled groundbreaking innovations: the GeForce RTX 5090, the Blackwell architecture, the open-source Groot N1 humanoid robot model, and a multi-year roadmap that promises to transform industries from gaming to enterprise IT. Here’s an in-depth, SEO-optimized exploration of the keynote, designed to dominate search results and captivate tech enthusiasts, developers, and business leaders alike.


    GTC 2025: The Epicenter of AI Innovation

    GTC has evolved from a niche graphics conference into a global showcase of AI’s transformative power, and the 2025 edition was no exception. Huang welcomed representatives from healthcare, transportation, retail, and the computer industry, thanking sponsors and attendees for making GTC a “Woodstock-turned-Super Bowl” of AI. With over 6 million CUDA developers worldwide and a sold-out crowd, the event underscored NVIDIA’s role as the backbone of the AI revolution. For those searching “What is GTC 2025?” or “NVIDIA AI conference highlights,” this keynote is the definitive answer.


    GeForce RTX 5090: 25 Years of Graphics Evolution Meets AI

    Huang kicked off with a nod to NVIDIA’s roots, unveiling the GeForce RTX 5090—a Blackwell-generation GPU marking 25 years since the original GeForce debuted. This compact powerhouse is 30% smaller in volume and 30% more energy-efficient than the RTX 4090, yet its performance is “hard to even compare.” Why? Artificial intelligence. Leveraging CUDA—the programming model that birthed modern AI—the RTX 5090 uses real-time path tracing, rendering every pixel with 100% accuracy. AI predicts 15 additional pixels for each one mathematically computed, ensuring temporal stability across frames.
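The pixel claim is easy to sanity-check: if AI infers 15 pixels for every one that is path-traced, only one in sixteen displayed pixels is fully computed:

```python
# Quick arithmetic behind the keynote claim: for every fully path-traced
# pixel, AI infers 15 more, so 1 of every 16 displayed pixels is computed.

computed = 1
predicted = 15
total = computed + predicted
rendered_fraction = computed / total
print(f"{rendered_fraction:.2%} of displayed pixels are fully computed")  # → 6.25%
```

That 16x multiplier is what lets a fixed ray-tracing budget fill a 4K frame at playable frame rates.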

    For gamers and creators searching “best GPU for 2025” or “RTX 5090 specs,” this card’s sold-out status worldwide speaks volumes. Huang highlighted how AI has “revolutionized computer graphics,” making the RTX 5090 a must-have for 4K gaming, ray tracing, and content creation. It’s a testament to NVIDIA’s ability to fuse heritage with cutting-edge tech, appealing to both nostalgic fans and forward-looking professionals.


    Blackwell Architecture: Powering the AI Factory Revolution

    The keynote’s centerpiece was the Blackwell architecture, now in full production and poised to redefine AI infrastructure. Huang introduced Blackwell NVLink 72, a liquid-cooled, 1-exaflop supercomputer packed into a single rack with 570 terabytes per second of memory bandwidth. Comprising 600,000 parts and 5,000 cables, it’s a “sight of beauty” for engineers—and a game-changer for AI factories.

    Huang explained that AI has shifted from retrieval-based computing to generative computing, where models like ChatGPT generate answers rather than fetch pre-stored data. This shift demands exponentially more computation, especially with the rise of “agentic AI”—systems that reason, plan, and act autonomously. Blackwell addresses this with a 40x performance leap over Hopper for inference tasks, driven by reasoning models that generate 100x more tokens than traditional LLMs. A demo of a wedding seating problem illustrated this: a reasoning model produced 8,000 tokens to reach the correct answer, while a traditional LLM floundered with 439.

    For businesses querying “AI infrastructure 2025” or “Blackwell GPU performance,” Blackwell’s scalability is unmatched. Huang emphasized its role in “AI factories,” where tokens—the building blocks of intelligence—are generated at scale, transforming raw data into foresight, scientific discovery, and robotic actions. With Dynamo—an open-source operating system—optimizing token throughput, Blackwell is the cornerstone of this new industrial revolution.


    Agentic AI: Reasoning and Robotics Take Center Stage

    Huang introduced “agentic AI” as the next wave, building on a decade of AI progress: perception AI (2010s), generative AI (past five years), and now AI with agency. These systems perceive context, reason step-by-step, and use tools—think Chain of Thought or consistency checking—to solve complex problems. This leap requires vast computational resources, as reasoning generates exponentially more tokens than one-shot answers.

    Physical AI, enabled by agentic systems, stole the show with robotics. Huang unveiled NVIDIA Isaac Groot N1, an open-source generalist foundation model for humanoid robots. Trained with synthetic data from Omniverse and Cosmos, Groot N1 features a dual-system architecture: slow thinking for perception and planning, fast thinking for precise actions. It can manipulate objects, execute multi-step tasks, and collaborate across embodiments—think warehouses, factories, or homes.

    With a projected 50-million-worker shortage by 2030, robotics could be a trillion-dollar industry. For searches like “humanoid robots 2025” or “NVIDIA robotics innovations,” Groot N1 positions NVIDIA as a leader, offering developers a scalable, open-source platform to address labor gaps and automate physical tasks.


    NVIDIA’s Multi-Year Roadmap: Planning the AI Future

    Huang laid out a predictable roadmap to help enterprises and cloud providers plan AI infrastructure—a rare move in tech. Key milestones include:

    • Blackwell Ultra (H2 2025): 1.5x more flops, 2x networking bandwidth, and enhanced memory for KV caching, gliding seamlessly into existing Blackwell setups.
    • Vera Rubin (H2 2026): Named after the dark matter pioneer, this architecture debuts NVLink 144, a new Vera CPU, a new Rubin GPU with HBM4 memory, and CX9 networking, scaling flops to 900x Hopper’s baseline.
    • Rubin Ultra (H2 2027): An extreme scale-up with 15 exaflops, 4.6 petabytes per second of bandwidth, and NVLink 576, packing 25 million parts per rack.
    • Feynman (Teased for 2028): A nod to the physicist, signaling continued innovation.

    This cadence—a new architecture every two years, with an upgrade in the year between—lets enterprises and cloud providers align capex and engineering for a projected $1 trillion data center buildout by decade’s end.


    Enterprise and Edge: DGX Spark, Station, and Spectrum-X

    NVIDIA’s enterprise push was equally ambitious. The DGX Spark, a workstation built with MediaTek, offers 20 CPU cores, 128GB of GPU memory, and 1 petaflop of compute for $150,000—aimed at the world’s 30 million software engineers and data scientists. The liquid-cooled DGX Station, with 20 petaflops and 72 CPU cores, targets researchers and is available through OEMs such as HP, Dell, and Lenovo. Attendees could reserve both machines at GTC.

    On the edge, a Cisco-NVIDIA-T-Mobile partnership integrates Spectrum-X Ethernet into radio networks, using AI to optimize signals and traffic. With $100 billion invested annually in communications infrastructure, the partnership promises smarter, adaptive networks.


    AI Factories: Dynamo and the Token Economy

    Huang redefined data centers as “AI factories,” where tokens drive revenue and quality of service. NVIDIA Dynamo, an open-source operating system for these factories, orchestrates them by balancing latency (tokens per second per user) against throughput (total tokens per second). A 100-megawatt Blackwell factory produces 1.2 billion tokens per second—40x Hopper’s output—which at $10 per million tokens works out to on the order of a billion dollars a day at full utilization.

    Dynamo’s ability to disaggregate prefill (flops-heavy context processing) from decode (bandwidth-heavy token output) is the key innovation. Partners like Perplexity are already on board, amplifying its appeal.
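The token economics above are simple arithmetic from the keynote’s own two figures (1.2 billion tokens per second and $10 per million tokens); the full-utilization assumption is mine:

```python
# Back-of-envelope revenue check using the keynote's stated figures,
# assuming the factory runs at full utilization all day.
TOKENS_PER_SECOND = 1.2e9   # stated Blackwell factory throughput
PRICE_PER_M_TOKENS = 10.0   # stated price assumption, $ per 1M tokens
SECONDS_PER_DAY = 86_400

daily_tokens = TOKENS_PER_SECOND * SECONDS_PER_DAY
daily_revenue = daily_tokens / 1e6 * PRICE_PER_M_TOKENS
print(f"{daily_tokens:.3e} tokens/day -> ${daily_revenue:,.0f}/day")
```

That comes to roughly 1.04e14 tokens and about $1 billion per day, which is why Huang frames tokens as the factory’s revenue-bearing product.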


    Silicon Photonics: Sustainability Meets Scale

    Scaling to millions of GPUs demands innovation beyond copper. NVIDIA’s 1.6 terabit-per-second silicon photonic switch, using micro-ring resonator modulators (MRM), eliminates power-hungry pluggable transceivers, saving 60 megawatts in a 250,000-GPU data center—enough to power 100 Rubin Ultra racks. Shipping in H2 2025 (InfiniBand) and H2 2026 (Spectrum-X), the technology pairs sustainability with performance at scale.
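To see how transceiver elimination reaches tens of megawatts, here is a back-of-envelope check. The per-transceiver wattage and transceivers-per-GPU figures are illustrative assumptions of mine (they happen to reproduce the stated 60 MW), not numbers from the keynote:

```python
# Hypothetical inputs: ~30 W per pluggable optical transceiver and ~8 optics
# per GPU across scale-up and scale-out links. Only the 250,000-GPU count and
# the 60 MW result come from the article.
WATTS_PER_TRANSCEIVER = 30   # assumed
TRANSCEIVERS_PER_GPU = 8     # assumed
GPUS = 250_000               # stated data center size

saved_watts = WATTS_PER_TRANSCEIVER * TRANSCEIVERS_PER_GPU * GPUS
print(f"{saved_watts / 1e6:.0f} MW saved")  # 60 MW
```

Even small per-link savings multiply into data-center-scale power budgets at this GPU count, which is the core sustainability argument for co-packaged optics.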


    Omniverse and Cosmos: Synthetic Data for Robotics

    Physical AI hinges on data, and NVIDIA’s Omniverse and Cosmos deliver it. Omniverse generates photorealistic 4D environments, while Cosmos scales them infinitely for robot training. A new physics engine, Newton—developed with Google DeepMind and Disney Research—offers GPU-accelerated, fine-grained simulation for tactile feedback and motor skills. Together, these tools let developers train robots at superhuman speeds.


    Industry Impact: Automotive, Enterprise, and Beyond

    NVIDIA’s partnerships shone bright. GM tapped NVIDIA for its autonomous vehicle fleet, applying AI across manufacturing, design, and in-car systems, backed by the safety-focused Halos technology and its 7 million lines of safety-assessed code. In the enterprise, Accenture, AT&T, BlackRock, and others are integrating NVIDIA NIM microservices (such as an open-source R1 reasoning model) into agentic frameworks.


    NVIDIA’s Vision Unfolds

    Jensen Huang’s GTC 2025 keynote was a masterclass in vision and execution. From the RTX 5090’s gaming prowess to Blackwell’s AI factory dominance, GR00T N1’s robotic promise, and a roadmap to 2028, NVIDIA is building an AI-driven future. Visit nvidia.com/gtc to explore sessions, reserve a DGX Spark, or dive into CUDA’s 900+ libraries. As Huang said, “This is just the beginning.”


  • Google’s Gemini 2.0: Is This the Dawn of the AI Agent?

    Google just dropped a bombshell: Gemini 2.0. It’s not just another AI update; it feels like a real shift towards AI that can actually do things for you – what they’re calling “agentic AI.” This is Google doubling down in the AI race, and it’s pretty exciting stuff.

    So, What’s the Big Deal with Gemini 2.0?

    Think of it this way: previous AI was great at understanding and sorting info. Gemini 2.0 is about taking action. It’s about:

    • Really “getting” the world: It’s got much sharper reasoning skills, so it can handle complex questions and take in information in all sorts of ways – text, images, even audio.
    • Thinking ahead: This isn’t just about reacting; it’s about anticipating what you need.
    • Actually doing stuff: With your permission, it can complete tasks – making it more like a helpful assistant than just a chatbot.

    Key Improvements You Should Know About:

    • Gemini 2.0 Flash: Speed Demon: This is the first taste of 2.0, and it’s all about speed. It’s reportedly twice as fast as Gemini 1.5 Pro while beating it on key benchmarks. That’s impressive.
    • Multimodal Magic: It can handle text, images, and audio, both coming in and going out. Think image generation and text-to-speech built right in.
    • Plays Well with Others: It connects seamlessly with Google Search, can run code, and works with custom tools. This means it can actually get things done in the real world.
    • The Agent Angle: This is the core of it all. It’s built to power AI agents that can work independently towards goals, with a human in the loop, of course.
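The “agent with a human in the loop” pattern from the list above can be sketched in a few lines. This is a generic illustration with hypothetical tool names and an approval callback, not the Gemini API:

```python
# Minimal agentic-loop sketch: plan -> ask permission -> act.
# Tool names and the approval step are illustrative stand-ins.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for {q!r}",      # stand-in for search grounding
    "run_code": lambda src: f"executed {len(src)} chars",
}

def agent_step(goal: str, tool: str, arg: str, approve: Callable[[str], bool]) -> str:
    """Propose an action, and only execute it if the human approves the plan."""
    plan = f"use {tool} to advance goal {goal!r}"
    if not approve(plan):
        return "action declined by user"
    return TOOLS[tool](arg)

result = agent_step("summarize AI news", "search", "Gemini 2.0", approve=lambda plan: True)
print(result)  # results for 'Gemini 2.0'
```

The point of the pattern is that the model proposes, but a policy gate (here, the `approve` callback) decides, which is what “with a human in the loop” means in practice.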

    Google’s Big Vision for AI Agents:

    Google’s not just playing around here. They have a clear vision for AI as a true partner:

    • Project Astra: They’re exploring AI agents that can understand the world in a really deep way, using all those different types of information (multimodal).
    • Project Mariner: They’re also figuring out how humans and AI agents can work together smoothly.
    • Jules the Programmer: They’re even working on AI that can help developers code more efficiently.

    How Can You Try It Out?

    • Gemini API: Developers can get their hands on Gemini 2.0 Flash through the Gemini API in Google AI Studio and Vertex AI.
    • Gemini Chat Assistant: There’s also an experimental version in the Gemini chat assistant on desktop and mobile web. Worth checking out!

    In a Nutshell:

    Gemini 2.0 feels like a significant leap. The focus on AI that can actually take action is a big deal. It’ll be interesting to see how Google integrates this into its products and what new possibilities it unlocks.

  • Magentic-One: A Deep Dive into Microsoft’s Generalist Multi-Agent System for Complex Tasks

    As AI advances, there’s a growing push to create systems that don’t just communicate with us but can complete tasks autonomously. Microsoft’s Magentic-One represents a major leap in this direction. Unlike single-agent models, this multi-agent system brings together a team of specialized AI agents, coordinated by a lead agent known as the Orchestrator, to tackle complex, open-ended tasks across various domains. From managing files to coding, each agent has a role, making Magentic-One capable of handling the multifaceted tasks that individuals encounter in everyday work and personal life.

    In this article, we’ll explore what Magentic-One is, how it functions, and the potential it holds for redefining productivity and automation across industries. This system isn’t just a glimpse into the future of AI—it’s a call to action for developers, researchers, and businesses to reimagine how we can leverage AI to tackle our most challenging tasks.

    Unpacking Magentic-One: What It Is and How It Works

    Magentic-One is built on a multi-agent architecture, with each agent specializing in tasks such as navigating the web, handling local files, writing code, and more. The system’s modularity allows for adaptability and easy scaling, making it a versatile solution for complex workflows. This modular design not only simplifies development but also mirrors the efficiency of object-oriented programming. Each agent encapsulates specific skills and knowledge, enabling Magentic-One to break down and complete complex, multi-step tasks.

    The Agents of Magentic-One: A Look Inside

    At the heart of Magentic-One is the Orchestrator agent. Acting as the lead, the Orchestrator plans, assigns, and tracks tasks for other agents. Here’s how each agent in Magentic-One contributes to task completion:

    • Orchestrator: Manages high-level planning, task decomposition, and tracking overall progress. It uses two main loops, an outer loop for planning and an inner loop for real-time task monitoring, to ensure tasks are completed accurately and efficiently.
    • WebSurfer: A web-navigation specialist, this agent uses a Chromium-based browser to perform searches, summarize content, and interact with web pages by simulating user actions like clicking and typing.
    • FileSurfer: This agent operates within the local file system, previewing files, listing directory contents, and performing other basic navigation tasks. It’s useful for applications requiring access to on-device resources.
    • Coder: As the system’s programming expert, Coder can write, analyze, and execute code. This agent is key to generating new digital artifacts and responding to software development tasks.
    • ComputerTerminal: Provides command-line access, executing programs, running scripts, and installing libraries as needed for specific tasks.

    Each of these agents acts semi-autonomously under the guidance of the Orchestrator, which manages task distribution and monitors progress, making it possible for Magentic-One to handle diverse, dynamic workflows.

    How Magentic-One Tackles Complex, Multi-Step Tasks

    The Orchestrator operates with two main loops: the outer loop and the inner loop. The outer loop creates and updates a Task Ledger, where facts, educated guesses, and overall plans are stored. The inner loop handles a Progress Ledger that tracks the current state of each subtask. This dual-loop system allows Magentic-One to adapt as tasks evolve. When the Orchestrator detects an error or lack of progress, it adjusts the plan in real-time, ensuring a more resilient approach to problem-solving.
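The two-loop, two-ledger control flow can be sketched as follows. The ledger fields and agent names follow the description above, but the code itself is my simplified illustration, not Microsoft’s implementation:

```python
# Toy sketch of the Orchestrator's dual-loop design: an outer loop that
# re-plans on stalls, and an inner loop that assigns subtasks and records progress.
task_ledger = {"facts": [], "guesses": [], "plan": ["search web", "write code"]}
progress_ledger = {"search web": "pending", "write code": "pending"}

AGENTS = {"search web": "WebSurfer", "write code": "Coder"}

def inner_loop() -> bool:
    """Dispatch each pending subtask to its agent; return True if progress stalled."""
    for subtask, state in progress_ledger.items():
        if state == "pending":
            agent = AGENTS[subtask]
            progress_ledger[subtask] = "done"  # a real agent would act here
            task_ledger["facts"].append(f"{agent} completed {subtask!r}")
    return False  # no stall detected in this toy run

def outer_loop():
    """Keep executing until every subtask is done, revising the plan on a stall."""
    while any(state != "done" for state in progress_ledger.values()):
        if inner_loop():
            task_ledger["guesses"].append("revise plan after stall")

outer_loop()
print(task_ledger["facts"])
```

Separating the slow planning loop from the fast monitoring loop is what lets the system replan when a subtask fails without discarding the facts it has already gathered.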

    Benchmarking Magentic-One’s Capabilities

    Microsoft’s team evaluated Magentic-One on multiple benchmarks: GAIA, AssistantBench, and WebArena. These benchmarks test a system’s ability to manage complex, multi-step tasks that require planning, reasoning, and the use of tools like web browsers. Evaluated with Microsoft’s AutoGenBench harness, Magentic-One demonstrated competitive performance against leading open-source models, performing on par with some state-of-the-art solutions on GAIA and AssistantBench and posting strong results on WebArena.

    The results validate Magentic-One’s status as a strong generalist AI, showcasing how a well-coordinated multi-agent approach can solve sophisticated tasks. Its ability to integrate specialized skills across different agents offers a powerful alternative to traditional monolithic AI systems, especially for workflows requiring diverse actions and real-time adaptability.

    Real-World Applications of Magentic-One

    The potential applications for Magentic-One span numerous fields. In data analysis, the system can autonomously gather, organize, and interpret large datasets, saving analysts hours of manual effort. In software development, the Coder agent enables Magentic-One to handle basic programming tasks, generate code snippets, and troubleshoot issues autonomously.

    In scientific research, Magentic-One’s WebSurfer and FileSurfer agents can automate the literature review process, scanning for relevant studies and summarizing findings. Additionally, for businesses dealing with customer service or administrative tasks, Magentic-One can manage web-based workflows and file operations, increasing efficiency and accuracy.

    Safety and Ethical Considerations in Agentic AI

    Agentic AI systems like Magentic-One hold immense promise, but they also come with risks. During testing, researchers encountered issues like agents attempting to bypass login protections or posting on social media without authorization. Microsoft’s development team integrated several safety protocols to mitigate these risks. Each agent operates in a sandboxed environment, and Microsoft advises users to monitor all agent activities, especially when agents interact with external systems.

    The team’s adherence to Responsible AI practices includes regular red-teaming exercises to identify potential vulnerabilities. For instance, Magentic-One is designed to recognize irreversible actions—such as deleting files or sending emails—and pause to seek human approval before executing these tasks. Microsoft encourages users to exercise caution, particularly for high-stakes applications where errors could lead to serious consequences.
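The pause-before-irreversible-action behavior described above can be sketched as a simple approval gate. The action list and messages here are my illustration of the pattern, not Microsoft’s code:

```python
# Hedged sketch of an approval gate for irreversible actions.
# The IRREVERSIBLE set and message formats are hypothetical.
IRREVERSIBLE = {"delete_file", "send_email", "post_message"}

def execute(action: str, target: str, approve=lambda msg: False) -> str:
    """Run an action, but pause for human approval if it cannot be undone."""
    if action in IRREVERSIBLE and not approve(f"{action} on {target}?"):
        return f"blocked: {action} needs human approval"
    return f"ran {action} on {target}"

print(execute("list_dir", "/tmp"))            # safe action: runs immediately
print(execute("delete_file", "report.docx"))  # irreversible: blocked by default
print(execute("delete_file", "report.docx", approve=lambda m: True))
```

Defaulting `approve` to a refusal makes the system fail safe: an agent that forgets to wire up the human check simply cannot take irreversible actions.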

    The Future of Agentic AI and Magentic-One’s Role

    Magentic-One is a glimpse into the future of agentic AI, where systems will go beyond mere automation to become trusted digital collaborators. This shift demands continuous innovation in both technology and safety measures, ensuring AI systems are reliable and aligned with user expectations. Microsoft has opened Magentic-One as an open-source tool, encouraging developers and researchers to contribute to its evolution.

    One promising direction is equipping agents with better decision-making frameworks, allowing them to assess the reversibility and risk of actions. This kind of nuanced reasoning will help create AI systems capable of managing complex, dynamic environments with minimal human intervention, while remaining safe and predictable.

    Wrap Up

    Magentic-One is a landmark in multi-agent AI systems, marking a step toward a world where AI isn’t just reactive but actively assists in real-world problem-solving. Microsoft’s innovative approach in designing a modular, scalable, and safety-conscious AI framework underscores its commitment to advancing AI responsibly. As Magentic-One continues to evolve, it may redefine how individuals and businesses approach automation, paving the way for a future where AI enhances productivity and innovation across every industry.

    Ready to Explore Magentic-One?

    To delve deeper, visit Microsoft Research’s website for more insights on Magentic-One’s architecture, performance, and safety protocols. Join the community and contribute to the responsible development of next-generation AI systems.