PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

Tag: AI bubble

  • All-In Podcast Breaks Down OpenAI’s Turbulent Week, the AI Arms Race, and Socialism’s Surge in America

    November 8, 2025

    In the latest episode of the All-In Podcast, aired on November 7, 2025, hosts Jason Calacanis, Chamath Palihapitiya, David Sacks, and guest Brad Gerstner (with David Friedberg absent) delivered a packed discussion on the tech world’s hottest topics. From OpenAI’s public relations mishaps and massive infrastructure bets to the intensifying U.S.-China AI rivalry, market volatility, and the surprising rise of socialism in U.S. politics, the episode painted a vivid picture of an industry at a crossroads. Here’s a deep dive into the key takeaways.

    OpenAI’s “Rough Week”: From Altman’s Feistiness to CFO’s Backstop Blunder

    The podcast kicked off with a spotlight on OpenAI, which has been under intense scrutiny following CEO Sam Altman’s appearance on the BG2 podcast. Gerstner, who hosts BG2, recounted asking Altman about OpenAI’s reported $13 billion in revenue juxtaposed against $1.4 trillion in spending commitments for data centers and infrastructure. Altman’s response—offering to find buyers for Gerstner’s shares if he was unhappy—went viral, sparking debates about OpenAI’s financial health and the broader AI “bubble.”

    Gerstner defended the question as “mundane” and fair, noting that Altman later clarified OpenAI’s revenue is growing steeply, projecting a $20 billion run rate by year’s end. Palihapitiya downplayed the market’s reaction, attributing stock dips in companies like Microsoft and Nvidia to natural “risk-off” cycles rather than OpenAI-specific drama. “Every now and then you have a bad day,” he said, suggesting Altman might regret his tone but emphasizing broader market dynamics.

    The conversation escalated with OpenAI CFO Sarah Friar’s comments to the Wall Street Journal expressing hope for a U.S. government “backstop” to finance infrastructure. The remark fueled bailout rumors, prompting Friar to clarify she meant public-private partnerships for industrial capacity, not direct aid. Sacks, recently appointed as the White House AI “czar,” emphatically stated, “There’s not going to be a federal bailout for AI.” He praised the sector’s competitiveness, noting rivals like Grok, Claude, and Gemini ensure no single player is “too big to fail.”

    The hosts debated OpenAI’s revenue model, with Calacanis highlighting its consumer-heavy mix (an estimated 75% from subscriptions like ChatGPT Plus at $240/year) in contrast to competitors such as Anthropic, whose revenue is API-driven and enterprise-focused. Gerstner expressed optimism about the “AI supercycle,” betting on long-term growth despite headwinds like free alternatives from Google and Apple.

    The AI Race: Jensen Huang’s Warning and the Call for Federal Unity

    Shifting gears, the panel addressed Nvidia CEO Jensen Huang’s stark prediction to the Financial Times: “China is going to win the AI race.” Huang cited U.S. regulatory hurdles and power constraints as key obstacles, contrasting with China’s centralized support for GPUs and data centers.

    Gerstner echoed Huang’s call for acceleration, praising federal efforts to clear regulatory barriers for power infrastructure. Palihapitiya warned of Chinese open-source models like Qwen gaining traction, as seen in products like Cursor 2.0. Sacks advocated for a federal AI framework to preempt a patchwork of state regulations, arguing blue states like California and New York could impose “ideological capture” via DEI mandates disguised as anti-discrimination rules. “We need federal preemption,” he urged, invoking the Commerce Clause to ensure a unified national market.

    Calacanis tied this to environmental successes like California’s emissions standards but cautioned against overregulation stifling innovation. The consensus: Without streamlined permitting and behind-the-meter power generation, the U.S. risks ceding ground to China.

    Market Woes: Consumer Cracks, Layoffs, and the AI Job Debate

    The discussion turned to broader economic signals, with Gerstner highlighting a “two-tier economy” where high-end consumers thrive while lower-income groups falter. Credit card delinquencies at 2009 levels, regional bank rollovers, and earnings beats tempered by cautious forecasts painted a picture of volatility. Palihapitiya attributed recent market dips to year-end rebalancing, not AI hype, predicting a “risk-on” rebound by February.

    A heated exchange ensued over layoffs and unemployment, particularly among 20-24-year-olds (at 9.2%). Calacanis attributed spikes to AI displacing entry-level white-collar jobs, citing startup trends and software deployments. Sacks countered with data showing stable white-collar employment percentages, calling AI blame “anecdotal” and suggesting factors like unemployable “woke” degrees or over-hiring during zero-interest-rate policies (ZIRP). Gerstner aligned with Sacks, noting companies’ shift to “flatter is faster” efficiency cultures, per Morgan Stanley analysis.

    Inflation ticking up to 3% was flagged as a barrier to rate cuts, with Calacanis criticizing the administration for downplaying it. The hosts noted Trump’s net approval rating had dipped to -13%, with 65% of Americans saying he has fallen short on middle-class issues. Palihapitiya called for domestic wins, such as using trade-deal funds (e.g., the $3.2 trillion pledged by Japan and other allies) to boost earnings.

    Socialism’s Rise: Mamdani’s NYC Win and the Filibuster Nuclear Option

    The episode’s most provocative segment analyzed Democratic socialist Zohran Mamdani’s upset victory as New York City’s mayor-elect. Mamdani, promising rent freezes, free transit, and higher taxes on the rich (pushing rates to 54%), won narrowly at 50.4%. Calacanis noted polling showed strong support from young women and recent transplants, while native New Yorkers largely rejected him.

    Palihapitiya linked this to a “broken generational compact,” quoting Peter Thiel on student debt and housing unaffordability fueling anti-capitalist sentiment. He advocated reforming student loans via market pricing and even expressed newfound sympathy for forgiveness, provided it is tied to systemic overhaul. Sacks warned of Democrats shifting left, with “centrist” figures like Joe Manchin and Kyrsten Sinema exiting and leaving the party’s energy to its revolutionaries. He tied this to the ongoing government shutdown, blaming Democrats’ filibuster leverage and urging Republicans to invoke the “nuclear option,” eliminate the filibuster, and pass reforms.

    Gerstner, fresh from debating “ban the billionaires” at Stanford (where many students initially favored it), stressed Republicans must address affordability through policies like no taxes on tips or overtime. He predicted an A/B test: San Francisco’s centrist turnaround versus New York’s potential chaos under Mamdani.

    Holiday Cheer and Final Thoughts

    Amid the heavy topics, the hosts plugged their All-In Holiday Spectacular on December 6, promising comedy roasts by Kill Tony, poker, and an open bar. Calacanis shared updates on his Founder University expansions to Saudi Arabia and Japan.

    Overall, the episode underscored optimism in AI’s transformative potential tempered by real-world challenges: financial scrutiny, geopolitical rivalry, economic inequality, and political polarization. As Gerstner put it, “Time is on your side if you’re betting over a five- to 10-year horizon.” With Trump’s mandate in play, the panel urged swift action to secure America’s edge—or risk socialism’s further ascent.

  • The Benefits of Bubbles: Why the AI Boom’s Madness Is Humanity’s Shortcut to Progress

    TL;DR:

    Ben Thompson’s “The Benefits of Bubbles” argues that financial manias like today’s AI boom, while destined to burst, play a crucial role in accelerating innovation and infrastructure. Drawing on Carlota Perez and the newer work of Byrne Hobart and Tobias Huber, Thompson contends that bubbles aren’t just speculative excess—they’re coordination mechanisms that align capital, talent, and belief around transformative technologies. Even when they collapse, the lasting payoff is progress.

    Summary

    Ben Thompson revisits the classic question: are bubbles inherently bad? His answer is nuanced. Yes, bubbles pop. But they also build. Thompson situates the current AI explosion—OpenAI’s trillion-dollar commitments and hyperscaler spending sprees—within the historical pattern described by Carlota Perez in Technological Revolutions and Financial Capital. Perez’s thesis: every major technological revolution begins with an “Installation Phase” fueled by speculation and waste. The bubble funds infrastructure that outlasts its financiers, paving the way for a “Deployment Phase” where society reaps the benefits.

    Thompson extends this logic using Byrne Hobart and Tobias Huber’s concept of “Inflection Bubbles,” which he contrasts with destructive “Mean-Reversion Bubbles” like subprime mortgages. Inflection bubbles occur when investors bet that the future will be radically different, not just marginally improved. The dot-com bubble, for instance, built the Internet’s cognitive and physical backbone—from fiber networks to AJAX-driven interactivity—that enabled the next two decades of growth.

    Applied to AI, Thompson sees similar dynamics. The bubble is creating massive investment in GPUs, fabs, and—most importantly—power generation. Unlike chips, which depreciate quickly, energy infrastructure lasts decades and underpins future innovation. Microsoft, Amazon, and others are already building gigawatts of new capacity, potentially spurring a long-overdue resurgence in energy growth. This, Thompson suggests, may become the “railroads and power plants” of the AI age.

    He also highlights AI’s “cognitive capacity payoff.” As everyone from startups to Chinese labs works on AI, knowledge diffusion is near-instantaneous, driving rapid iteration. Investment bubbles fund parallel experimentation—new chip architectures, lithography startups, and fundamental rethinks of computing models. Even failures accelerate collective learning. Hobart and Huber call this “parallelized innovation”: bubbles compress decades of progress into a few intense years through shared belief and FOMO-driven coordination.

    Thompson concludes with a warning against stagnation. He contrasts the AI mania with the risk-aversion of the 2010s, when Big Tech calcified and innovation slowed. Bubbles, for all their chaos, restore the “spiritual energy” of creation—a willingness to take irrational risks for something new. While the AI boom will eventually deflate, its benefits, like power infrastructure and new computing paradigms, may endure for generations.

    Key Takeaways

    • Bubbles are essential accelerators. They fund infrastructure and innovation that rational markets never would.
    • Carlota Perez’s “Installation Phase” framework explains how speculative capital lays the groundwork for future growth.
    • Inflection bubbles drive paradigm shifts. They aren’t about small improvements—they bet on orders-of-magnitude change.
    • The AI bubble is building the real economy. Fabs, power plants, and chip ecosystems are long-term assets disguised as mania.
    • Cognitive capacity grows in parallel. When everyone builds simultaneously, progress compounds across fields.
    • FOMO has a purpose. Speculative energy coordinates capital and creativity at scale.
    • Stagnation is the alternative. Without bubbles, societies drift toward safety, bureaucracy, and creative paralysis.
    • The true payoff of AI may be infrastructure. Power generation, not GPUs, could be the era’s lasting legacy.
    • Belief drives progress. Mania is a social technology for collective imagination.

    1-Sentence Summary:

    Ben Thompson argues that the AI boom is a classic “inflection bubble” — a burst of coordinated mania that wastes money in the short term but builds the physical and intellectual foundations of the next technological age.

  • The AI Revolution Unveiled: Jonathan Ross on Groq, NVIDIA, and the Future of Inference


    TL;DR

    Jonathan Ross, Groq’s CEO, predicts inference will eclipse training in AI’s future, with Groq’s Language Processing Units (LPUs) outpacing NVIDIA’s GPUs in cost and efficiency. He envisions synthetic data breaking scaling limits, a $1.5 billion Saudi revenue deal fueling Groq’s growth, and AI unlocking human potential through prompt engineering, though he warns of an overabundance trap.

    Detailed Summary

    In a captivating 20VC episode with Harry Stebbings, Jonathan Ross, the mastermind behind Groq and Google’s original Tensor Processing Unit (TPU), outlines a transformative vision for AI. Ross asserts that inference—deploying AI models in real-world scenarios—will soon overshadow training, challenging NVIDIA’s GPU stronghold. Groq’s LPUs, engineered for affordable, high-volume inference, deliver over five times the cost efficiency and three times the energy savings of NVIDIA’s training-focused GPUs by avoiding external memory like HBM. He champions synthetic data from advanced models as a breakthrough, dismantling scaling law barriers and redirecting focus to compute, data, and algorithmic bottlenecks.
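
    The cost claim is easy to sanity-check with back-of-envelope arithmetic. The short Python sketch below compares dollars per million tokens and joules per token for two generic accelerators; the Accelerator class and every dollar, throughput, and wattage figure are illustrative assumptions chosen only so the ratios land near the “5x cost, 3x energy” claim from the episode, not Groq or NVIDIA specifications.

    ```python
    from dataclasses import dataclass


    @dataclass
    class Accelerator:
        name: str
        hourly_cost_usd: float    # assumed all-in cost per accelerator-hour
        tokens_per_second: float  # assumed sustained inference throughput
        watts: float              # assumed average power draw

        def usd_per_million_tokens(self) -> float:
            # hourly cost divided by tokens served per hour, scaled to 1M tokens
            return self.hourly_cost_usd / (self.tokens_per_second * 3600) * 1_000_000

        def joules_per_token(self) -> float:
            # watts are joules per second, so divide by tokens per second
            return self.watts / self.tokens_per_second


    # Hypothetical figures, not vendor specs; chosen so the ratios come out
    # near the ~5x cost and ~3x energy advantage cited in the interview.
    gpu = Accelerator("generic training GPU", hourly_cost_usd=3.00, tokens_per_second=500, watts=700)
    lpu = Accelerator("generic inference LPU", hourly_cost_usd=1.50, tokens_per_second=1250, watts=580)

    for chip in (gpu, lpu):
        print(f"{chip.name}: ${chip.usd_per_million_tokens():.2f} per 1M tokens, "
              f"{chip.joules_per_token():.2f} J per token")

    print(f"cost advantage:   {gpu.usd_per_million_tokens() / lpu.usd_per_million_tokens():.1f}x")
    print(f"energy advantage: {gpu.joules_per_token() / lpu.joules_per_token():.1f}x")
    ```

    Swapping in real hourly prices, measured throughput, and actual power draw for specific hardware would turn this illustration into a genuine comparison.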

    Groq’s explosive growth, from 640 chips in early 2024 to over 40,000 by year-end and a target of 2 million in 2025, is propelled by a $1.5 billion Saudi revenue deal rather than a funding round. Partners like Aramco fund the capital expenditure and share profits after a set return, freeing Groq from balance-sheet constraints. Ross targets the roughly 40% of NVIDIA’s revenue tied to inference as a weak spot, cautions against a data center investment bubble inflated by hyperscaler exaggeration, and foresees AI value concentrating among a few giants via a power law, with Groq planning to join them by addressing unmet demand. Reflecting on Groq’s near-failure, salvaged by “Groq Bonds,” he dreams of AI enhancing human agency, potentially empowering 1.4 billion Africans through prompt engineering, while urging vigilance against settling for “good enough” in an abundant future.

    The Big Questions Raised—and Answered

    Ross’s insights provoke big, often metaphor-laden questions about AI’s trajectory and humanity’s role. Here’s what the discussion implicitly asks, paired with his responses:

    • What happens when creation becomes so easy it redefines who gets to create?
      • Answer: Ross champions prompt engineering as a revolutionary force, turning speech into a tool that could unleash 1.4 billion African entrepreneurs. By making creation as simple as talking, AI could shift power from tech gatekeepers to the masses, sparking a global wave of innovation.
    • Can an underdog outrun a titan in a scale-driven game?
      • Answer: Groq can outpace NVIDIA, Ross asserts, by targeting inference—a massive, underserved market—rather than battling over training. With no HBM bottlenecks and a scalable Saudi-backed model, Groq’s agility could topple NVIDIA’s inference share, proving size isn’t everything.
    • What’s the human cost when machines replace our effort?
      • Answer: Ross likens LPUs to tireless employees, predicting a shift from labor to compute-driven economics. Yet, he warns of “financial diabetes”—a loss of drive in an AI-abundant world—urging us to preserve agency lest we become passive consumers of convenience.
    • Is the AI gold rush a promise or a pipe dream?
      • Answer: It’s both. Ross foresees billions wasted on overhyped data centers and “AI t-shirts,” but insists the total value created will outstrip losses. The winners, like Groq, will solve real problems, not chase fleeting trends.
    • How do we keep innovation’s spirit alive amid efficiency’s rise?
      • Answer: By prioritizing human agency and delegation—Ross’s “anti-founder mode”—over micromanagement, he says. Groq’s 25 million token-per-second coin aligns teams to innovate, not just optimize, ensuring efficiency amplifies creativity.
    • What’s the price of chasing a future that might not materialize?
      • Answer: Seven years of struggle taught Ross the emotional and financial toll is steep—Groq nearly died—but strategic bets (like inference) pay off when the wave hits. Resilience turns risk into reward.
    • Will AI’s pursuit drown us in wasted ambition?
      • Answer: Partially, yes—Ross cites VC’s “Keynesian Beauty Contest,” where cash floods copycats. But hyperscalers and problem-solvers like Groq will rise above the noise, turning ambition into tangible progress.
    • Can abundance liberate us without trapping us in ease?
      • Answer: Ross fears AI could erode striving, drawing from his boom-bust childhood. Prompt engineering offers liberation—empowering billions—but only if outliers reject “good enough” and push for excellence.

    Jonathan Ross’s vision is a clarion call: AI’s future isn’t just about faster chips or bigger models—it’s about who wields the tools and how they shape us. Groq’s battle with NVIDIA isn’t merely corporate; it’s a referendum on whether innovation can stay human-centric in an age of machine abundance. As Ross puts it, “Your job is to get positioned for the wave”—and he’s riding it, challenging us to paddle alongside or risk being left ashore.