Tag: Accelerated Computing

  • Jensen Huang on Nvidia’s Supply Chain Moat, TPU Competition, China Export Controls, and Why Nvidia Will Not Become a Cloud (Dwarkesh Podcast Summary)

    TL;DW (Too Long; Didn’t Watch)

    Jensen Huang sat down with Dwarkesh Patel for over 90 minutes covering Nvidia’s supply chain dominance, the TPU threat, why Nvidia will not become a hyperscaler, whether the US should sell AI chips to China, and why Nvidia does not pursue multiple chip architectures at once. Jensen framed Nvidia’s entire business as transforming “electrons into tokens” and argued that Nvidia’s real moat is not any single technology but the full stack ecosystem it has built over two decades. He was blunt about his regret over not investing in Anthropic and OpenAI earlier, passionate about keeping the American tech stack dominant worldwide, and dismissive of the idea that China’s chip industry can be meaningfully contained through export controls.

    Key Takeaways

    1. Nvidia’s moat is the ecosystem, not the chip. Jensen repeatedly emphasized that Nvidia’s competitive advantage comes from CUDA, its massive installed base, its deep partnerships across the entire supply chain, and the fact that it operates in every cloud. The moat is not a single product but an interlocking system that took 20+ years to build.

    2. Supply chain bottlenecks are temporary, energy bottlenecks are not. Jensen argued that CoWoS packaging, HBM memory, EUV capacity, and logic fabrication bottlenecks can all be resolved in two to three years with the right demand signal. The real constraint on AI scaling is energy policy, which takes far longer to fix.

    3. TPUs and ASICs are not an existential threat to Nvidia. Jensen was emphatic that no competitor has demonstrated better price-performance or performance-per-watt than Nvidia, and challenged the TPU and Trainium teams to prove otherwise on public benchmarks like InferenceMAX and MLPerf. He described Anthropic as a “unique instance, not a trend” for TPU adoption.

    4. Jensen regrets not investing in Anthropic and OpenAI earlier. He admitted he did not deeply internalize how much capital AI labs needed and that traditional VC funding was not sufficient for companies at that scale. He described this as a clear miss, though he said Nvidia was not in a position to make multi-billion dollar investments at the time.

    5. Nvidia will not become a hyperscaler. Jensen’s philosophy is “do as much as needed, as little as possible.” Building cloud infrastructure is something other companies can do, so Nvidia supports neoclouds like CoreWeave, Nebius, and Nscale instead of competing with them. Nvidia invests in ecosystem partners rather than vertically integrating into cloud services.

    6. Jensen is strongly against US chip export controls on China. This was the longest and most heated segment of the interview. Jensen argued that China already has abundant compute, energy, and AI researchers, and that export controls have accelerated China’s domestic chip industry while causing the US to concede the world’s second-largest technology market. He compared the situation to how US telecom policy allowed Huawei to dominate global telecommunications.

    7. AI will cause software tool usage to skyrocket, not collapse. Jensen pushed back on the narrative that AI will commoditize software companies. He argued that agents will use existing tools at massive scale, causing the number of instances of products like Excel, Synopsys Design Compiler, and other enterprise tools to grow exponentially.

    8. Nvidia does not pick winners among AI labs. Jensen explained that Nvidia invests across multiple foundation model companies simultaneously and refuses to favor any single one. He cited his own company’s unlikely survival story as the reason for this humility: Nvidia’s original graphics architecture was “precisely wrong” and would have been counted out by anyone picking winners.

    9. Nvidia added Groq for premium token economics. Nvidia recently acquired Groq and is folding it into the CUDA ecosystem because the market is now segmenting into different token tiers. Some customers will pay premium prices for faster response times even at lower throughput, creating a new segment of the inference market.

    10. Without AI, Nvidia would still be very large. Jensen was clear that accelerated computing, not AI specifically, is the foundational mission of the company. Molecular dynamics, quantum chemistry, computational lithography, data processing, and physics simulation all benefit from GPU acceleration regardless of deep learning.

    Detailed Summary

    Nvidia’s Real Business: Electrons to Tokens

    Jensen opened the conversation by reframing Nvidia’s entire value proposition. When Dwarkesh suggested that Nvidia is fundamentally a software company that sends a GDSII file to TSMC for manufacturing, Jensen pushed back hard. He described Nvidia’s job as transforming electrons into tokens, with everything in between representing an “incredible journey” of artistry, engineering, science, and invention. He said the transformation is far from deeply understood and the journey is far from over, making commoditization unlikely.

    Jensen described Nvidia as operating a philosophy of doing “as much as necessary and as little as possible.” Whatever Nvidia does not need to do itself, it partners with someone else and makes it part of the broader ecosystem. This is why Nvidia has what Jensen called probably the largest ecosystem of partners in the industry, spanning the full supply chain upstream and downstream, application developers, model makers, and all five layers of the AI stack.

    On the question of whether AI will commoditize software companies, Jensen offered a contrarian take. He argued that agents are going to use software tools at unprecedented scale, meaning the number of instances of products like Excel, Cadence design tools, and Synopsys compilers will skyrocket. Today the bottleneck is the number of human engineers. Tomorrow, those engineers will be supported by swarms of agents exploring design spaces and using the same tools humans use today. Jensen said the reason this has not happened yet is simply that the agents are not good enough at using tools. That will change.

    The Supply Chain Moat

    Dwarkesh pressed Jensen on Nvidia’s reported $100 billion (and potentially $250 billion) in purchase commitments with foundries, memory manufacturers, and packaging companies. The question was whether Nvidia’s real moat for the next few years is simply locking up scarce upstream components so that no competitor can get the memory and logic they need to build alternative accelerators.

    Jensen confirmed this is a significant advantage but framed it differently. He said Nvidia has made enormous explicit and implicit commitments upstream. The implicit commitments matter just as much: Jensen personally meets with CEOs across the supply chain to explain the scale of the coming AI industry, convince them to invest in capacity, and assure them that Nvidia’s downstream demand is large enough to justify that investment. Nvidia’s GTC conference serves this purpose too, bringing the entire ecosystem together so upstream suppliers can see downstream demand and vice versa.

    Jensen described a process of systematically “prefetching bottlenecks” years in advance. CoWoS advanced packaging was a major bottleneck two years ago, but Nvidia swarmed it with repeated doubling of capacity until TSMC recognized it as mainstream computing technology rather than a specialty product. More recently, Nvidia has invested in the silicon photonics ecosystem through partnerships with Lumentum and Coherent, invented new packaging technologies, licensed patents to keep the supply chain open, and even invested in new testing equipment like double-sided probing.

    When Dwarkesh asked about the ultimate physical bottlenecks, Jensen surprised him. The hardest bottleneck to solve is not CoWoS or HBM or EUV machines. It is the plumbers and electricians needed to build data centers. Jensen used this as a launching point to criticize “doomers” who discourage people from pursuing careers in software engineering or radiology, arguing that scaring people out of these professions creates the real bottlenecks.

    On EUV and logic scaling specifically, Jensen was optimistic. He said no supply chain bottleneck lasts longer than two to three years. Once you can build one of something, you can build ten, and once you can build ten, you can build a million. The key is a clear demand signal. If TSMC is convinced of the demand, ASML will produce enough EUV machines. Meanwhile, Nvidia continues to improve computing efficiency by 10x to 50x per generation through architecture, algorithms, and system design.

    The TPU Question

    Dwarkesh pushed hard on whether Google’s TPUs represent a real threat, noting that two of the top three AI models (Claude and Gemini) were trained on TPUs. Jensen drew a sharp distinction between what Nvidia builds and what a TPU is. Nvidia builds accelerated computing, which serves molecular dynamics, quantum chromodynamics, data processing, fluid dynamics, particle physics, and AI. A TPU is a tensor processing unit optimized for matrix multiplies. Nvidia’s market reach is far greater than any TPU or ASIC can possibly have.

    Jensen emphasized programmability as Nvidia’s core architectural advantage. If you want to invent a new attention mechanism, build a hybrid SSM model, fuse diffusion and autoregressive techniques, or disaggregate computation in a novel way, you need a generally programmable architecture. The only way to achieve 10x or 100x performance leaps (versus the roughly 25% per year from Moore’s Law) is to fundamentally change the algorithm, and that requires the flexibility CUDA provides.
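
    To make the programmability point concrete, here is a minimal numpy sketch (our illustration, not Nvidia code): on a generally programmable architecture, a new attention mechanism is a one-line change to the score function, whereas a fixed-function tensor unit bakes a single formula into silicon.

        import numpy as np

        def attention(Q, K, V, score_fn):
            """Generic attention: swap score_fn to prototype a new mechanism."""
            scores = score_fn(Q, K)                         # (seq, seq) relevance scores
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
            return weights @ V                              # weighted sum of values

        # The standard scaled dot-product score...
        dot_scores = lambda Q, K: (Q @ K.T) / np.sqrt(Q.shape[-1])
        # ...or a hypothetical cosine-similarity variant, changed in one line:
        cos_scores = lambda Q, K: (Q @ K.T) / (
            np.linalg.norm(Q, axis=-1, keepdims=True)
            * np.linalg.norm(K, axis=-1)[None, :] + 1e-8)

        rng = np.random.default_rng(0)
        Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
        out_dot = attention(Q, K, V, dot_scores)  # same code path, two mechanisms
        out_cos = attention(Q, K, V, cos_scores)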

    On the specific question of whether hyperscalers with huge engineering teams can simply write their own kernels and bypass CUDA, Jensen acknowledged they do write custom kernels but argued that Nvidia’s engineers still routinely deliver 2x to 3x speedups when they optimize a partner’s stack. He described Nvidia’s GPUs as “F1 racers” that anyone can drive at 100 mph, but extracting peak performance requires deep architectural expertise. Nvidia uses AI itself to generate many of its optimized kernels.

    Jensen was particularly blunt about public benchmarks. He pointed to Dylan Patel’s InferenceMAX benchmark and said neither TPU nor Trainium has been willing to demonstrate their claimed performance advantages on it. He said Nvidia’s performance-per-TCO is the best in the world, “bar none,” and challenged anyone to prove otherwise.

    Regarding Anthropic’s multi-gigawatt deal with Broadcom and Google for TPUs, Jensen called it “a unique instance, not a trend.” He said without Anthropic, there would be essentially no TPU growth and no Trainium growth. He traced this back to his own mistake: when Anthropic and OpenAI needed multi-billion dollar investments from their compute suppliers to get off the ground, Nvidia was not in a position to provide that capital. Google and AWS were, and in return, Anthropic committed to using their compute.

    Nvidia’s Investment Strategy and Regrets

    Jensen was unusually candid about his regret over not investing in foundation model companies earlier. He said he did not deeply internalize how different AI labs were from typical startups. A traditional VC would never put $5 to $10 billion into a single AI lab, but that was exactly what companies like OpenAI and Anthropic needed. By the time Jensen understood this, Nvidia was not in a financial or cultural position to make those kinds of investments.

    Now, Nvidia has invested approximately $30 billion in OpenAI and $10 billion in Anthropic. Jensen said he is delighted to support both and considers their existence essential for the world. But he acknowledged that these investments came at much higher valuations than would have been possible years earlier.

    Jensen explained Nvidia’s broader investment philosophy: support everyone, do not pick winners. If he invests in one foundation model company, he invests in all of them. This comes from hard-won humility. When Nvidia started, there were 60 3D graphics companies. Nvidia’s original architecture was “precisely wrong” and the company would have been at the top of most lists to fail. Jensen said he has enough humility from that experience to know that you cannot predict which AI company will ultimately succeed.

    Why Nvidia Will Not Become a Hyperscaler

    Dwarkesh pointed out that Nvidia has the cash to build and operate its own cloud infrastructure, bypassing the middleman ecosystem that converts CapEx into OpEx for AI labs. Jensen rejected this path based on his core operating philosophy.

    Had Nvidia not built its computing platform, NVLink, and the CUDA ecosystem, nobody else would have. He is “completely certain” of that. These are things Nvidia must do. But the world has lots of clouds. If Nvidia did not build a cloud, someone else would show up. So the answer is to support the ecosystem instead: invest in CoreWeave, Nscale, Nebius, and others to help them exist and scale, rather than competing with them.

    Jensen was clear that Nvidia is not trying to be in the financing business either. When OpenAI needed a $30 billion investment before its IPO, Nvidia stepped up because OpenAI needed it and Nvidia deeply believed in the company. But these are targeted ecosystem investments, not a strategic pivot into cloud services.

    On GPU allocation during shortages, Jensen pushed back on the narrative that Nvidia strategically “fractures” the market by giving allocations to smaller neoclouds. He said the process is straightforward: you forecast demand, you place a purchase order, and it is first in, first out. Nvidia never changes prices based on demand. Jensen said he prefers to be dependable and serve as the foundation of the industry rather than extracting maximum short-term value.

    The China Debate

    The longest and most heated section of the interview was Jensen’s case against US chip export controls on China. This was a genuine debate, with Dwarkesh pushing the national security argument and Jensen pushing back forcefully.

    Jensen’s core argument rested on several pillars. First, China already has abundant compute. They manufacture 60% or more of the world’s mainstream chips, have massive energy infrastructure (including empty data centers with full power), and employ roughly 50% of the world’s AI researchers. The threshold of compute needed to build models like Anthropic’s Mythos has already been reached and exceeded by China’s existing infrastructure.

    Second, export controls have backfired. They accelerated China’s domestic chip industry, forced their AI ecosystem to optimize for internal architectures instead of the American tech stack, and caused the United States to concede the second-largest technology market in the world. Jensen compared this directly to how US telecom policy allowed Huawei to dominate global telecommunications infrastructure.

    Third, Jensen argued that AI is a five-layer stack (energy, chips, computing platform, models, applications) and the US needs to win at every layer. Fixating on one layer (models) at the expense of another layer (chips) is counterproductive. If Chinese open source AI models end up optimized for non-American hardware and that stack gets exported to the global south, the Middle East, Africa, and Southeast Asia, the US will have lost something far more valuable than whatever marginal compute advantage the export controls provided.

    Dwarkesh countered with the Mythos example: Anthropic’s new model found thousands of high-severity zero-day vulnerabilities across every major operating system and browser, including one that had existed in OpenBSD for 27 years. If China had enough compute to train and deploy a model like Mythos at scale before the US could prepare, the cyber-offensive capabilities would be devastating.

    Jensen’s response was direct. Mythos was trained on “fairly mundane capacity” that is already abundantly available in China. The amount of compute is not the bottleneck for that kind of breakthrough. Great computer science is, and China has no shortage of brilliant AI researchers. He pointed to DeepSeek as evidence: most advances in AI come from algorithmic innovation, not raw hardware. If China’s researchers can achieve breakthroughs like DeepSeek with limited hardware, imagine what they could do with more.

    Jensen also argued for dialogue over confrontation. He said it is essential that American and Chinese AI researchers are talking to each other, and that both countries agree on what AI should not be used for. The idea that you can prevent AI risks by cutting off chip sales, when the real advances come from algorithms and computer science, reflects a fundamental misunderstanding of how AI progress works.

    The debate ended without resolution, but Jensen’s final point was sharp: “I’m not talking to somebody who woke up a loser. That loser attitude, that loser premise, makes no sense to me.”

    Why Not Multiple Chip Architectures?

    Near the end of the interview, Dwarkesh asked why Nvidia does not run multiple parallel chip projects with different architectures, like a Cerebras-style wafer-scale design or a Dojo-style huge package, or even one without CUDA.

    Jensen’s answer was simple: “We don’t have a better idea.” Nvidia simulates all of these alternative approaches in its internal simulators and they are provably worse. The company works on exactly the projects it wants to work on. If the workload were to change dramatically (not just the algorithms, but the actual market shape), Nvidia might add other accelerators.

    In fact, Nvidia recently did exactly this by acquiring Groq. The inference market is now segmenting into different tiers. Some customers will pay premium prices for extremely fast response times even if throughput is lower. This creates a new “high ASP token” segment that justifies a different point on the performance curve. But Jensen was clear: if he had more money, he would put it all behind Nvidia’s existing architecture, not diversify into alternatives.

    Nvidia Without AI

    Jensen closed by saying that even if the deep learning revolution had never happened, Nvidia would be “very, very large.” The premise of the company has always been that general-purpose computing cannot scale indefinitely and that domain-specific acceleration is the way forward. Molecular dynamics, seismic processing, image processing, computational lithography, quantum chemistry, and data processing all benefit from GPU acceleration regardless of AI. Jensen said the fundamental promise of accelerated computing has not changed, “not even a little bit.”

    Thoughts

    This interview is one of the most revealing Jensen Huang conversations in years, partly because Dwarkesh actually pushes back instead of lobbing softballs. A few things stand out.

    The Anthropic regret is real and significant. Jensen is essentially admitting that Nvidia’s biggest strategic miss of the AI era was not understanding that foundation model companies needed supplier-level capital commitments, not VC funding. The fact that Google and AWS used compute investments to lock in Anthropic’s architecture choices has had downstream consequences that Nvidia is still working to unwind. When Jensen says Anthropic is “a unique instance, not a trend” for TPU adoption, he is simultaneously downplaying the threat and revealing exactly how seriously he takes it.

    The China debate is the highlight. Jensen’s argument is more nuanced than it first appears. He is not saying “sell China everything.” He is saying the current binary approach of near-total restriction has backfired by accelerating China’s domestic chip industry and pushing the Chinese AI ecosystem away from the American tech stack. His comparison to the US telecom industry losing global market share to Huawei is pointed and historically grounded. Whether you agree with his conclusion or not, the framing of AI as a five-layer stack where the US needs to compete at every layer is a useful mental model.

    The “electrons to tokens” framing is Jensen at his best. It is a simple metaphor that captures something genuinely complex about where value is created in the AI supply chain. And his insistence that the transformation is “far from deeply understood” is a subtle way of arguing that Nvidia’s competitive position will be durable because the problem space is not close to being solved.

    The Groq acquisition reveal is interesting for what it signals about the inference market. If Nvidia is creating a separate product tier for premium-priced, low-latency tokens, it suggests the company sees inference economics fragmenting significantly. This aligns with the broader trend of AI becoming an enterprise product where different customers have wildly different willingness to pay based on how they use tokens.

    Finally, Jensen’s refusal to diversify chip architectures is a bold bet. “We simulate it all in our simulator, provably worse” is an incredibly confident statement. History is full of companies that were right until they were not. But Nvidia’s track record of 50x generation-over-generation improvements through co-design across processors, fabric, libraries, and algorithms is hard to argue with. The question is whether the current paradigm of transformer-based models on GPU clusters represents a local or global optimum for AI compute.

  • Beyond the Bubble: Jensen Huang on the Future of AI, Robotics, and Global Tech Strategy in 2026

    In a wide-ranging discussion on the No Priors Podcast, NVIDIA Founder and CEO Jensen Huang reflects on the rapid evolution of artificial intelligence throughout 2025 and provides a strategic roadmap for 2026. From the debunking of the “AI Bubble” to the rise of physical robotics and the “ChatGPT moments” coming for digital biology, Huang offers a masterclass in how accelerated computing is reshaping the global economy.


    TL;DW (Too Long; Didn’t Watch)

    • The Core Shift: General-purpose computing (CPUs) has hit a wall; the world is moving permanently to accelerated computing.
    • The Jobs Narrative: AI automates tasks, not purposes. It is solving labor shortages in manufacturing and nursing rather than causing mass unemployment.
    • The 2026 Breakthrough: Digital biology and physical robotics are slated for their “ChatGPT moment” this year.
    • Geopolitics: A nuanced, constructive relationship with China is essential, and open source is the “innovation flywheel” that keeps the U.S. competitive.

    Key Takeaways

    • Scaling Laws & Reasoning: 2025 proved that scaling compute still translates directly to intelligence, specifically through massive improvements in reasoning, grounding, and the elimination of hallucinations.
    • The End of “God AI”: Huang dismisses the myth of a monolithic “God AI.” Instead, the future is a diverse ecosystem of specialized models for biology, physics, coding, and more.
    • Energy as Infrastructure: AI data centers are “AI Factories.” Without a massive expansion in energy (including natural gas and nuclear), the next industrial revolution cannot happen.
    • Tokenomics: The cost of AI inference dropped 100x in 2024 and could fall by a factor of a billion over the next decade, making intelligence a near-free commodity (see the quick arithmetic after this list).
    • DeepSeek’s Impact: Open-source contributions from China, like DeepSeek, are significantly benefiting American startups and researchers, proving the value of a global open-source ecosystem.
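
    Taken at face value, the decade-long claim implies a steep but steady compounding rate. A quick back-of-the-envelope check (our arithmetic, not Huang’s):

        # Implied annual improvement if inference cost falls one-billion-fold in 10 years:
        # solve r**10 == 1e9, i.e. r = 1e9 ** (1/10)
        r = 1e9 ** (1 / 10)
        print(f"required annual cost-reduction factor: ~{r:.1f}x")  # ~7.9x per year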

    Detailed Summary

    The “Five-Layer Cake” of AI

    Huang explains AI not as a single app, but as a technology stack: Energy → Chips → Infrastructure → Models → Applications. He emphasizes that while the public focuses on chatbots, the real revolution is happening in “non-English” languages, such as the languages of proteins, chemicals, and physical movement.

    Task vs. Purpose: The Future of Labor

    Addressing the fear of job loss, Huang uses the “Radiologist Paradox.” While AI now powers nearly 100% of radiology applications, the number of radiologists has actually increased. Why? Because AI handles the task (scanning images), allowing the human to focus on the purpose (diagnosis and research). This same framework applies to software engineers: their purpose is solving problems, not just writing syntax.

    Robotics and Physical AI

    Huang is incredibly optimistic about robotics. He predicts a future where “everything that moves will be robotic.” By applying reasoning models to physical machines, we are moving from “digital rails” (pre-programmed paths) to autonomous agents that can navigate unknown environments. He foresees a trillion-dollar repair and maintenance industry emerging to support the billions of robots that will eventually inhabit our world.

    The “Bubble” Debate

    Is there an AI bubble? Huang argues “No.” He points to the desperate, unsatisfied demand for compute capacity across every industry. He notes that if chatbots disappeared tomorrow, NVIDIA would still thrive because the fundamental architecture of the world’s $100 trillion GDP is shifting from CPUs to GPUs to stay productive.


    Analysis & Thoughts

    Jensen Huang’s perspective is distinct because he views AI through the lens of industrial production. By calling data centers “factories” and tokens “output,” he strips away the “magic” of AI and reveals it as a standard industrial revolution—one that requires power, raw materials (data/chips), and specialized labor.

    His defense of Open Source is perhaps the most critical takeaway for policymakers. By arguing that open source prevents “suffocation” for startups and 100-year-old industrial companies, he positions transparency as a national security asset rather than a liability. As we head into 2026, the focus is clearly shifting from “Can the model talk?” to “Can the model build a protein or drive a truck?”

  • Jensen Huang on Joe Rogan: AI’s Future, Nuclear Energy, and NVIDIA’s Near-Death Origin Story

    In a landmark episode of the Joe Rogan Experience (JRE #2422), NVIDIA CEO Jensen Huang sat down for a rare, deep-dive conversation covering everything from the granular history of the GPU to the philosophical implications of artificial general intelligence. Huang, currently the longest-running tech CEO in the world, offered a fascinating look behind the curtain of the world’s most valuable company.

    For those who don’t have three hours to spare, we’ve compiled the “Too Long; Didn’t Watch” breakdown, key takeaways, and a detailed summary of this historic conversation.

    TL;DW (Too Long; Didn’t Watch)

    • The OpenAI Connection: Jensen personally delivered the first AI supercomputer (DGX-1) to Elon Musk and the OpenAI team in 2016, a pivotal moment that kickstarted the modern AI race.
    • The “Sega Moment”: NVIDIA almost went bankrupt in 1995. They were saved only because the CEO of Sega invested $5 million in them after Jensen admitted their technology was flawed and the contract needed to be broken.
    • Nuclear AI: Huang predicts that within the next decade, AI factories (data centers) will likely be powered by small, on-site nuclear reactors to handle immense energy demands.
    • Driven by Fear: Despite his success, Huang wakes up every morning with a “fear of failure” rather than a desire for success. He believes this anxiety is essential for survival in the tech industry.
    • The Immigrant Hustle: Huang’s childhood involved moving from Thailand to a reform school in rural Kentucky where he cleaned toilets and smoked cigarettes at age nine to fit in.

    Key Takeaways

    1. AI as a “Universal Function Approximator”

    Huang provided one of the most lucid non-technical explanations of deep learning to date. He described AI not just as a chatbot, but as a “universal function approximator.” While traditional software requires humans to write the function (input → code → output), AI flips this. You give it the input and the desired output, and the neural network figures out the function in the middle. This allows computers to solve problems for which humans cannot write the code, such as curing diseases or solving complex physics.
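
    To ground the idea, here is a minimal numpy sketch (our illustration, not Huang’s): rather than writing the function, we hand a tiny neural network input/output pairs for sin(x) and let gradient descent discover the mapping in the middle.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # inputs
        y = np.sin(x)                                       # desired outputs; we never code sin itself

        # One hidden tanh layer: the textbook universal function approximator.
        W1, b1 = rng.standard_normal((1, 32)) * 0.5, np.zeros(32)
        W2, b2 = rng.standard_normal((32, 1)) * 0.5, np.zeros(1)

        lr = 0.05
        for _ in range(5000):
            h = np.tanh(x @ W1 + b1)   # hidden activations
            pred = h @ W2 + b2         # network output
            err = pred - y             # prediction error
            # Backpropagate the squared-error gradient (constant factors folded into lr).
            gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
            dh = (err @ W2.T) * (1 - h ** 2)
            gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

        print(f"mean abs error: {np.abs(pred - y).mean():.4f}")  # small: the network 'found' the function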

    2. The Future of Work and Energy

    The conversation touched heavily on resources. Huang noted that we are in a transition from “Moore’s Law” (doubling performance) to “Huang’s Law” (accelerated computing), where the cost of computing drops while energy efficiency skyrockets. However, the sheer scale of AI requires massive power. He envisions a future of “energy abundance” driven by nuclear power, which will support the massive “AI factories” of the future.

    3. Safety Through “Smartness”

    Addressing Rogan’s concerns about AI safety and rogue sentience, Huang argued that “smarter is safer.” He compared AI to cars: a 1,000-horsepower car is safer than a Model T because the technology is channeled into braking, handling, and safety systems. Similarly, future computing power will be channeled into “reflection” and “fact-checking” before an AI gives an answer, reducing hallucinations and danger.
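
    The “reflection” idea maps onto a now-common inference-time pattern: spend extra model calls checking a draft before it ships. Below is a hypothetical Python sketch of that loop; draft, critique, and revise are stand-in names for separate model calls, not any real API.

        def answer_with_reflection(question, model, max_rounds=3):
            """Spend extra compute checking a draft before returning it.

            model.draft, model.critique, and model.revise are hypothetical
            stand-ins for separate model calls; no real API is implied.
            """
            draft = model.draft(question)
            for _ in range(max_rounds):
                issues = model.critique(question, draft)  # fact-check the draft
                if not issues:                            # nothing flagged: ship it
                    return draft
                draft = model.revise(question, draft, issues)
            return draft  # best effort after max_rounds of reflection

    Those extra calls are exactly where Huang expects surplus computing power to go: more “thinking” per answer rather than simply more answers.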

    Detailed Summary

    The Origin of the AI Boom

    The interview began with a look back at the relationship between NVIDIA and Elon Musk. In 2016, NVIDIA spent billions developing the DGX-1 supercomputer. At the time, no one understood it or wanted to buy it—except Musk. Jensen personally delivered the first unit to a small office in San Francisco where the OpenAI team (including Ilya Sutskever) was working. That hardware trained the early models that eventually became ChatGPT.

    The “Struggle” and the Sega Pivot

    Perhaps the most compelling part of the interview was Huang’s recounting of NVIDIA’s early days. In 1995, NVIDIA was building 3D graphics chips using “forward texture mapping” and curved surfaces—a strategy that turned out to be technically wrong compared to the industry standard. Facing bankruptcy, Huang had to tell his only major partner, Sega, that NVIDIA could not complete their console contract.

    In a move that saved the company, the CEO of Sega, who liked Jensen personally, agreed to invest the remaining $5 million of their contract into NVIDIA anyway. Jensen used that money to pivot, buying an emulator to test a new chip architecture (RIVA 128) that eventually revolutionized PC gaming. Huang admits that without that act of kindness and luck, NVIDIA would not exist today.

    From Kentucky to Silicon Valley

    Huang shared his “American Dream” story. Born in Taiwan and raised in Thailand, his parents sent him and his brother to the U.S. for safety during civil unrest. Due to a misunderstanding, they were enrolled in the Oneida Baptist Institute in Kentucky, which turned out to be a reform school for troubled youth. Huang described a rough upbringing where he was the youngest student, his roommate was a 17-year-old recovering from a knife fight, and he was responsible for cleaning the dorm toilets. He credits these hardships with giving him a high tolerance for pain and suffering—traits he says are required for entrepreneurship.

    The Philosophy of Leadership

    When asked how he stays motivated as the head of a trillion-dollar company, Huang gave a surprising answer: “I have a greater drive from not wanting to fail than the drive of wanting to succeed.” He described living in a constant state of “low-grade anxiety” that the company is 30 days away from going out of business. This paranoia, he argues, keeps the company honest, grounded, and agile enough to “surf the waves” of technological chaos.

    Some Thoughts

    What stands out most in this interview is the lack of “tech messiah” complex often seen in Silicon Valley. Jensen Huang does not present himself as a visionary who saw it all coming. Instead, he presents himself as a survivor—someone who was wrong about technology multiple times, who was saved by the grace of a Japanese executive, and who lucked into the AI boom because researchers happened to buy NVIDIA gaming cards to train neural networks.

    This humility, combined with the technical depth of how NVIDIA is re-architecting the world’s computing infrastructure, makes this one of the most essential JRE episodes for understanding where the future is heading. It serves as a reminder that the “overnight success” of AI is actually the result of 30 years of near-failures, pivots, and relentless problem-solving.