PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

Category: AI

  • The Benefits of Bubbles: Why the AI Boom’s Madness Is Humanity’s Shortcut to Progress

    TL;DR:

    Ben Thompson’s “The Benefits of Bubbles” argues that financial manias like today’s AI boom, while destined to burst, play a crucial role in accelerating innovation and infrastructure. Drawing on Carlota Perez and the newer work of Byrne Hobart and Tobias Huber, Thompson contends that bubbles aren’t just speculative excess—they’re coordination mechanisms that align capital, talent, and belief around transformative technologies. Even when they collapse, the lasting payoff is progress.

    Summary

    Ben Thompson revisits the classic question: are bubbles inherently bad? His answer is nuanced. Yes, bubbles pop. But they also build. Thompson situates the current AI explosion—OpenAI’s trillion-dollar commitments and hyperscaler spending sprees—within the historical pattern described by Carlota Perez in Technological Revolutions and Financial Capital. Perez’s thesis: every major technological revolution begins with an “Installation Phase” fueled by speculation and waste. The bubble funds infrastructure that outlasts its financiers, paving the way for a “Deployment Phase” where society reaps the benefits.

    Thompson extends this logic using Byrne Hobart and Tobias Huber’s concept of “Inflection Bubbles,” which he contrasts with destructive “Mean-Reversion Bubbles” like subprime mortgages. Inflection bubbles occur when investors bet that the future will be radically different, not just marginally improved. The dot-com bubble, for instance, built the Internet’s cognitive and physical backbone—from fiber networks to AJAX-driven interactivity—that enabled the next two decades of growth.

    Applied to AI, Thompson sees similar dynamics. The bubble is creating massive investment in GPUs, fabs, and—most importantly—power generation. Unlike chips, which decay quickly, energy infrastructure lasts decades and underpins future innovation. Microsoft, Amazon, and others are already building gigawatts of new capacity, potentially spurring a long-overdue resurgence in energy growth. This, Thompson suggests, may become the “railroads and power plants” of the AI age.

    He also highlights AI’s “cognitive capacity payoff.” As everyone from startups to Chinese labs works on AI, knowledge diffusion is near-instantaneous, driving rapid iteration. Investment bubbles fund parallel experimentation—new chip architectures, lithography startups, and fundamental rethinks of computing models. Even failures accelerate collective learning. Hobart and Huber call this “parallelized innovation”: bubbles compress decades of progress into a few intense years through shared belief and FOMO-driven coordination.

    Thompson concludes with a warning against stagnation. He contrasts the AI mania with the risk-aversion of the 2010s, when Big Tech calcified and innovation slowed. Bubbles, for all their chaos, restore the “spiritual energy” of creation—a willingness to take irrational risks for something new. While the AI boom will eventually deflate, its benefits, like power infrastructure and new computing paradigms, may endure for generations.

    Key Takeaways

    • Bubbles are essential accelerators. They fund infrastructure and innovation that rational markets never would.
    • Carlota Perez’s “Installation Phase” framework explains how speculative capital lays the groundwork for future growth.
    • Inflection bubbles drive paradigm shifts. They aren’t about small improvements—they bet on orders-of-magnitude change.
    • The AI bubble is building the real economy. Fabs, power plants, and chip ecosystems are long-term assets disguised as mania.
    • Cognitive capacity grows in parallel. When everyone builds simultaneously, progress compounds across fields.
    • FOMO has a purpose. Speculative energy coordinates capital and creativity at scale.
    • Stagnation is the alternative. Without bubbles, societies drift toward safety, bureaucracy, and creative paralysis.
    • The true payoff of AI may be infrastructure. Power generation, not GPUs, could be the era’s lasting legacy.
    • Belief drives progress. Mania is a social technology for collective imagination.

    1-Sentence Summary:

    Ben Thompson argues that the AI boom is a classic “inflection bubble” — a burst of coordinated mania that wastes money in the short term but builds the physical and intellectual foundations of the next technological age.

  • Sam Altman on Trust, Persuasion, and the Future of Intelligence: A Deep Dive into AI, Power, and Human Adaptation

    TL;DW

    Sam Altman, CEO of OpenAI, explains how AI will soon revolutionize productivity, science, and society. GPT-6 will represent the first leap from imitation to original discovery. Within a few years, major organizations will be mostly AI-run, energy will become the key constraint, and the way humans work, communicate, and learn will change permanently. Yet, trust, persuasion, and meaning remain human domains.

    Key Takeaways

    • OpenAI’s speed comes from focus, delegation, and clarity; hardware efforts mirror software culture despite slower cycles.
    • Email is “very bad” and Slack only slightly better; AI-native collaboration tools will replace them.
    • GPT-6 will make new scientific discoveries, not just summarize others.
    • Billion-dollar companies could run with two or three people and AI systems, though social trust will slow adoption.
    • Governments will inevitably act as insurers of last resort for AI but shouldn’t control it.
    • AI trust depends on neutrality; paid bias would destroy user confidence.
    • Energy is the new bottleneck, with short-term reliance on natural gas and long-term fusion and solar dominance.
    • Education and work will shift toward AI literacy, while privacy, free expression, and adult autonomy remain central.
    • The real danger isn’t rogue AI but subtle, unintentional persuasion shaping global beliefs.
    • Books and culture will survive, but the way we work and think will be transformed.

    Summary

    Altman begins by describing how OpenAI achieved rapid progress through delegation and simplicity. The company’s mission is clearer than ever: build the infrastructure and intelligence needed for AGI. Hardware projects now run with the same creative intensity as software, though timelines are longer and risk higher.

    He views traditional communication systems as broken. Email creates inertia and fake productivity; Slack is only a temporary fix. Altman foresees a fully AI-driven coordination layer where agents manage most tasks autonomously, escalating to humans only when needed.

    GPT-6, he says, may become the first AI to generate new science rather than assist with existing research—a leap comparable to GPT-3’s Turing-test breakthrough. Within a few years, divisions of OpenAI could be 85% AI-run. Billion-dollar companies will operate with tiny human teams and vast AI infrastructure. Society, however, will lag in trust—people irrationally prefer human judgment even when AIs outperform them.

    Governments, he predicts, will become the “insurer of last resort” for the AI-driven economy, similar to their role in finance and nuclear energy. He opposes overregulation but accepts deeper state involvement. Trust and transparency will be vital; AI products must not accept paid manipulation. A single biased recommendation would destroy ChatGPT’s relationship with users.

    Commerce will evolve: neutral commissions and low margins will replace ad taxes. Altman welcomes shrinking profit margins as signs of efficiency. He sees AI as a driver of abundance, reducing costs across industries but expanding opportunity through scale.

    Creativity and art will remain human in meaning even as AI equals or surpasses technical skill. AI-generated poetry may reach “8.8 out of 10” quality soon, perhaps even a perfect 10—but emotional context and authorship will still matter. The process of deciding what is great may always be human.

    Energy, not compute, is the ultimate constraint. “We need more electrons,” he says. Natural gas will fill the gap short term, while fusion and solar power dominate the future. He remains bullish on fusion and expects it to combine with solar in driving abundance.

    Education will shift from degrees to capability. College returns will fall while AI literacy becomes essential. Instead of formal training, people will learn through AI itself—asking it to teach them how to use it better. Institutions will resist change, but individuals will adapt faster.

    Privacy and freedom of use are core principles. Altman wants adults treated like adults, protected by doctor-level confidentiality with AI. However, guardrails remain for users in mental distress. He values expressive freedom but sees the need for mental-health-aware design.

    The most profound risk he highlights isn’t rogue superintelligence but “accidental persuasion”—AI subtly influencing beliefs at scale without intent. Global reliance on a few large models could create unseen cultural drift. He worries about AI’s power to nudge societies rather than destroy them.

    Culturally, he expects the rhythm of daily work to change completely. Emails, meetings, and Slack will vanish, replaced by AI mediation. Family life, friendship, and nature will remain largely untouched. Books will persist but as a smaller share of learning, displaced by interactive, AI-driven experiences.

    Altman’s philosophical close: one day, humanity will build a safe, self-improving superintelligence. Before it begins, someone must type the first prompt. His question—what should those words be?—remains unanswered, a reflection of humility before the unknown future of intelligence.

  • AI vs Human Intelligence: The End of Cognitive Work?

    In a profound and unsettling conversation on “The Journey Man,” Raoul Pal sits down with Emad Mostaque, co-founder of Stability AI, to discuss the imminent ‘Economic Singularity.’ Their core thesis: super-intelligent, rapidly cheapening AI is poised to make all human cognitive and physical labor economically obsolete within the next 1-3 years. This shift will fundamentally break and reshape our current economic models, society, and the very concept of value.

    This isn’t a far-off science fiction scenario; they argue it’s an economic reality set to unfold within the next 1,000 days. We’ve captured the full summary, key takeaways, and detailed breakdown of their entire discussion below.

    🚀 Too Long; Didn’t Watch (TL;DW)

    The video is a discussion about how super-intelligent, rapidly cheapening AI is poised to make all human cognitive and physical labor economically obsolete within the next 1-3 years, leading to an “economic singularity” that will fundamentally break and reshape our current economic models, society, and the very concept of value.

    Executive Summary: The Coming Singularity

    Emad Mostaque argues we are at an “intelligence inversion” point, where AI intelligence is becoming uncapped and incredibly cheap, while human intelligence is fixed. The cost of AI-driven cognitive work is plummeting so fast that a full-time AI “worker” will cost less than a dollar a day within the next year.

    This collapse in the price of labor—both cognitive and, soon after, physical (via humanoid robots)—will trigger an “economic singularity” within the next 1,000 days. This event will render traditional economic models, like the Fed’s control over inflation and unemployment, completely non-functional. With the value of labor going to zero, the tax base evaporates and the entire system breaks. The only advice: start using these AI tools daily (what Mostaque calls “vibe coding”) to adapt your thinking and stay on the cutting edge.
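
    The “less than a dollar a day” claim is easy to sanity-check with back-of-the-envelope arithmetic. The figures below are illustrative assumptions, not numbers from the discussion: actual token prices and an agent’s daily output vary widely by model and workload.

```python
# Back-of-the-envelope check of the "AI worker under a dollar a day" claim.
# The daily token volume and the low-end price are hypothetical assumptions;
# only the $600-per-million-token starting point comes from the discussion.

def daily_cost(tokens_per_day: float, usd_per_million_tokens: float) -> float:
    """Cost in USD of generating tokens_per_day at a per-million-token price."""
    return tokens_per_day / 1_000_000 * usd_per_million_tokens

# Assume an agent emits ~2 million tokens in a working day (hypothetical).
old_price = daily_cost(2_000_000, 600.0)   # at $600 per million tokens
new_price = daily_cost(2_000_000, 0.40)    # at a hypothetical $0.40 per million

print(f"At $600/M tokens:  ${old_price:,.2f}/day")   # $1,200.00/day
print(f"At $0.40/M tokens: ${new_price:.2f}/day")    # $0.80/day
```

    Under these assumptions, a three-order-of-magnitude drop in token price is exactly what moves a full-time cognitive workload from four figures a day to under a dollar.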

    Key Takeaways from the Discussion

    • New Economic Model (MIND): Mostaque introduces a new economic theory for the AI age, moving beyond old scarcity-based models. It identifies four key capitals: Material, Intelligence, Network, and Diversity.
    • The Intelligence Inversion: We are at a point where AI intelligence is becoming uncapped and incredibly cheap, while human intelligence is fixed. AI doesn’t need to sleep or eat, and its cost is collapsing.
    • The End of Cognitive Work: The cost of AI-driven cognitive work is plummeting. What cost $600 per million tokens will soon cost pennies, making the cost of a full-time cognitive AI worker less than a dollar a day within the next year.
    • The “Economic Singularity” is Imminent: This price collapse will lead to an “economic singularity,” where current economic models no longer function. They predict this societal-level disruption will happen within the next 1,000 days, or 1-3 years.
    • AI Will Saturate All Benchmarks: AI is already winning Olympiads in physics, math, and coding. It’s predicted that AI will meet or exceed top-human performance on every cognitive benchmark by 2027.
    • Physical Labor is Next: This isn’t limited to cognitive work. Humanoid robots, like Tesla’s Optimus, will also drive the cost of physical labor to near-zero, replacing everyone from truck drivers to factory workers.
    • The New Value of Humans: In a world where AI performs all labor, human value will shift to things like network connections, community, and unique human experiences.
    • Action Plan – “Vibe Coding”: The single most important thing individuals can do is to start using these AI tools daily. Mostaque calls this “vibe coding”—using AI agents and models to build things, ask questions, and change the way you think to stay on the cutting edge.
    • The “Life Raft”: Both speakers agree the future is unpredictable. This uncertainty leads them to conclude that digital assets (crypto) may become a primary store of value as people flee a traditional system that is fundamentally breaking.

    Watch the full, mind-bending conversation to get the complete context from Raoul Pal and Emad Mostaque.

    Detailed Summary: The End of Scarcity Economics

    The conversation begins with Raoul Pal introducing his guest, Emad Mostaque, who has developed a new economic theory for the “exponential age.” Emad explains that traditional economics, built on scarcity, is obsolete. His new model is based on generative AI and redefines capital into four types: Material, Intelligence, Network, and Diversity (MIND).

    The Intelligence Inversion and Collapse of Labor

    The core of the discussion is the concept of an “intelligence inversion.” AI models are not only matching but rapidly exceeding human intelligence across all fields, including math, physics, and medicine. More importantly, the cost of this intelligence is collapsing. Emad calculates that the cost for an AI to perform a full day’s worth of human cognitive work will soon be pennies. This development, he argues, will make almost all human cognitive labor (work done at a computer) economically worthless within the next 1-3 years.

    The Economic Singularity

    This leads to what Pal calls the “economic singularity.” When the value of labor goes to zero, the entire economic system breaks. The Federal Reserve’s tools become useless, companies will stop hiring graduates and then fire existing workers, and the tax base (which in the US is mostly income tax) will evaporate.

    The speakers stress that this isn’t a distant future; AI is predicted to “saturate” or beat all human benchmarks by 2027. This revolution extends to physical labor as well. The rise of humanoid robots means all manual labor will also go to zero in value, with robots costing perhaps a dollar an hour.

    Rethinking Value and The Path Forward

    With all labor (cognitive and physical) becoming worthless, the nature of value itself changes. They posit that the only scarce things left will be human attention, human-to-human network connections, and provably scarce digital assets. They see the coming boom in digital assets as a direct consequence of this singularity, as people panic and seek a “life raft” out of the old, collapsing system.

    They conclude by discussing what an individual can do. Emad’s primary advice is to engage with the technology immediately. He encourages “vibe coding,” which means using AI tools and agents daily to build, create, and learn. This, he says, is the only way to adapt your thinking and stay relevant in the transition. They both agree the future is completely unknown, but that embracing the technology is the only path forward.

  • Composer: Building a Fast Frontier Model with Reinforcement Learning

    Composer represents Cursor’s most ambitious step yet toward a new generation of intelligent, high-speed coding agents. Built through deep reinforcement learning (RL) and large-scale infrastructure, Composer delivers frontier-level results at speeds up to four times faster than comparable models. It isn’t just another large language model; it’s an actively trained software engineering assistant optimized to think, plan, and code with precision — in real time.

    From Cheetah to Composer: The Evolution of Speed

    The origins of Composer go back to an experimental prototype called Cheetah, an agent Cursor developed to study how much faster coding models could get before hitting usability limits. Developers consistently preferred the speed and fluidity of an agent that responded instantly, keeping them “in flow.” Cheetah proved the concept, but it was Composer that matured it — integrating reinforcement learning and mixture-of-experts (MoE) architecture to achieve both speed and intelligence.

    Composer’s training goal was simple but demanding: make the model capable of solving real-world programming challenges in real codebases using actual developer tools. During RL, Composer was given tasks like editing files, running terminal commands, performing semantic searches, or refactoring code. Its objective wasn’t just to get the right answer — it was to work efficiently, using minimal steps, adhering to existing abstractions, and maintaining code quality.

    Training on Real Engineering Environments

    Rather than relying on synthetic datasets or static benchmarks, Cursor trained Composer within a dynamic software environment. Every RL episode simulated an authentic engineering workflow — debugging, writing unit tests, applying linter fixes, and performing large-scale refactors. Over time, Composer developed behaviors that mirror an experienced developer’s workflow. It learned when to open a file, when to search globally, and when to execute a command rather than speculate.

    Cursor’s evaluation framework, Cursor Bench, measures progress by realism rather than abstract metrics. It compiles actual agent requests from engineers and compares Composer’s solutions to human-curated optimal responses. This lets Cursor measure not just correctness, but also how well the model respects a team’s architecture, naming conventions, and software practices — metrics that matter in production environments.

    Reinforcement Learning as a Performance Engine

    Reinforcement learning is at the heart of Composer’s performance. Unlike supervised fine-tuning, which simply mimics examples, RL rewards Composer for producing high-quality, efficient, and contextually relevant work. It actively learns to choose the right tools, minimize unnecessary output, and exploit parallelism across tasks. The model was even rewarded for avoiding unsupported claims — pushing it to generate more verifiable and responsible code suggestions.
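
    A reward with those properties can be sketched in a few lines. None of the field names, weights, or thresholds below come from Cursor — they are hypothetical, chosen only to illustrate how correctness, step-efficiency, and claim-verifiability can be folded into one scalar signal.

```python
# Hypothetical composite RL reward in the spirit described above.
# Weights and field names are illustrative assumptions, not Cursor's design.

from dataclasses import dataclass

@dataclass
class EpisodeOutcome:
    tests_passed: bool        # did the agent's edit pass the task's tests?
    tool_calls: int           # steps taken (fewer is better)
    unsupported_claims: int   # statements not backed by code or test output

def reward(o: EpisodeOutcome, step_budget: int = 20) -> float:
    r = 1.0 if o.tests_passed else 0.0              # primary signal: correct work
    r -= 0.01 * max(0, o.tool_calls - step_budget)  # penalize inefficient episodes
    r -= 0.1 * o.unsupported_claims                 # penalize unverifiable claims
    return r

print(reward(EpisodeOutcome(True, 12, 0)))   # → 1.0
print(reward(EpisodeOutcome(True, 30, 2)))   # ≈ 0.7: correct but wasteful episode
```

    The key design point is that correctness dominates while the secondary terms break ties, which is one plausible way a policy could learn to prefer short, verifiable tool-use trajectories.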

    As RL progressed, emergent behaviors appeared. Composer began autonomously running semantic searches to explore codebases, fixing linter errors, and even generating and executing tests to validate its own work. These self-taught habits transformed it from a passive text generator into an active agent capable of iterative reasoning.

    Infrastructure at Scale: Thousands of Sandboxed Agents

    Behind Composer’s intelligence is a massive engineering effort. Training large MoE models efficiently requires significant parallelization and precision management. Cursor’s infrastructure, built with PyTorch and Ray, powers asynchronous RL at scale. Their system supports thousands of simultaneous environments, each a sandboxed virtual workspace where Composer experiments safely with file edits, code execution, and search queries.

    To achieve this scale, the team integrated MXFP8 MoE kernels with expert and hybrid-sharded data parallelism. This setup allows distributed training across thousands of NVIDIA GPUs with minimal communication cost — effectively combining speed, scale, and precision. MXFP8 also enables faster inference without any need for post-training quantization, giving developers real-world performance gains instantly.

    Cursor’s infrastructure can spawn hundreds of thousands of concurrent sandboxed coding environments. This capability, adapted from their Background Agents system, was essential to unify RL experiments with production-grade conditions. It ensures that Composer’s training environment matches the complexity of real-world coding, creating a model genuinely optimized for developer workflows.

    The Cursor Bench and What “Frontier” Means

    Composer’s benchmark performance earned it a place in what Cursor calls the “Fast Frontier” class — models designed for efficient inference while maintaining top-tier quality. This group includes systems like Haiku 4.5 and Gemini Flash 2.5. While GPT-5 and Sonnet 4.5 remain the strongest overall, Composer outperforms nearly every open-weight model, including Qwen Coder and GLM 4.6. In tokens-per-second performance, Composer’s throughput is among the highest ever measured under the standardized Anthropic tokenizer.

    Built by Developers, for Developers

    Composer isn’t just research — it’s in daily use inside Cursor. Engineers rely on it for their own development, using it to edit code, manage large repositories, and explore unfamiliar projects. This internal dogfooding loop means Composer is constantly tested and improved in real production contexts. Its success is measured by one thing: whether it helps developers get more done, faster, and with fewer interruptions.

    Cursor’s goal isn’t to replace developers, but to enhance them — providing an assistant that acts as an extension of their workflow. By combining fast inference, contextual understanding, and reinforcement learning, Composer turns AI from a static completion tool into a real collaborator.

    Wrap Up

    Composer represents a milestone in AI-assisted software engineering. It demonstrates that reinforcement learning, when applied at scale with the right infrastructure and metrics, can produce agents that are not only faster but also more disciplined, efficient, and trustworthy. For developers, it’s a step toward a future where coding feels as seamless and interactive as conversation — powered by an agent that truly understands how to build software.

  • Extropic’s Thermodynamic Revolution: 10,000x More Efficient AI That Could Smash the Energy Wall

    Artificial intelligence is about to hit an energy wall. As data centers devour gigawatts to power models like GPT-4, the cost of computation is scaling faster than our ability to produce electricity. Extropic Corporation, a deep-tech startup founded three years ago, believes it has found a way through that wall — by reinventing the computer itself. Their new class of thermodynamic hardware could make generative AI up to 10,000× more energy-efficient than today’s GPUs.

    From GPUs to TSUs: The End of the Hardware Lottery

    Modern AI runs on GPUs — chips originally designed for graphics rendering, not probabilistic reasoning. Each floating-point operation burns precious joules moving data across silicon. Extropic argues that this design is fundamentally mismatched to the needs of modern AI, which is probabilistic by nature. Instead of computing exact results, generative models sample from vast probability spaces. The company’s solution is the Thermodynamic Sampling Unit (TSU) — a chip that doesn’t process numbers, but samples from probability distributions directly.

    TSUs are built entirely from standard CMOS transistors, meaning they can scale using existing semiconductor fabs. Unlike exotic academic approaches that require magnetic junctions or optical randomness, Extropic’s design uses the natural thermal noise of transistors as its source of entropy. This turns what engineers usually fight to suppress — noise — into the very fuel for computation.

    X0 and XTR-0: The Birth of a New Computing Platform

    Extropic’s first hardware platform, XTR-0 (Experimental Testing & Research Platform 0), combines a CPU, FPGA, and sockets for daughterboards containing early test chips called X0. X0 proved that all-transistor probabilistic circuits can generate programmable randomness at scale. These chips perform operations like sampling from Bernoulli, Gaussian, or categorical distributions — the building blocks of probabilistic AI.

    The company’s pbit circuit acts like an electronic coin flipper, generating millions of biased random bits per second using 10,000× less energy than a GPU’s floating-point addition. Higher-order circuits like pdit (categorical sampler), pmode (Gaussian sampler), and pMoG (mixture-of-Gaussians generator) expand the toolkit, enabling full probabilistic models to be implemented natively in silicon. Together, these circuits form the foundation of the TSU architecture — a physical embodiment of energy-based computation.
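
    These primitives have straightforward software analogues. The sketch below simulates them with Python’s pseudo-random generator standing in for transistor thermal noise; the circuit names follow the article, but the function signatures are my own illustrative assumptions, not Extropic’s API.

```python
# Software analogue of Extropic's sampling primitives. A real pbit derives its
# randomness from transistor thermal noise; here Python's PRNG stands in.
# Signatures are illustrative assumptions, not the actual TSU interface.

import random

def pbit(p: float) -> int:
    """Biased coin flip: returns 1 with probability p (Bernoulli sample)."""
    return 1 if random.random() < p else 0

def pdit(weights: list[float]) -> int:
    """Categorical sample: index i with probability weights[i] / sum(weights)."""
    return random.choices(range(len(weights)), weights=weights)[0]

def pmode(mu: float, sigma: float) -> float:
    """Gaussian sample with mean mu and standard deviation sigma."""
    return random.gauss(mu, sigma)

def pMoG(components: list[tuple[float, float, float]]) -> float:
    """Mixture of Gaussians: components are (weight, mu, sigma) triples.
    Picks a component categorically, then samples from its Gaussian."""
    w, mu, sigma = components[pdit([c[0] for c in components])]
    return random.gauss(mu, sigma)

# Empirically, pbit(0.25) returns 1 about 25% of the time:
flips = sum(pbit(0.25) for _ in range(100_000))
print(flips / 100_000)  # ≈ 0.25
```

    The hardware claim is that each such sample costs picojoules rather than the nanojoule-scale cost of emulating randomness with floating-point arithmetic.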

    The Denoising Thermodynamic Model (DTM): Diffusion Without the Energy Bill

    Hardware alone isn’t enough. Extropic also introduced a new AI algorithm built specifically for TSUs — the Denoising Thermodynamic Model (DTM). Inspired by diffusion models like Stable Diffusion, DTMs chain together multiple energy-based models that gradually denoise data over time. This architecture avoids the “mixing–expressivity trade-off” that plagues traditional EBMs, making them both scalable and efficient.

    In simulations, DTMs running on modeled TSUs matched GPU-based diffusion models on image-generation benchmarks like Fashion-MNIST — while consuming roughly one ten-thousandth the energy. That’s the difference between joules and picojoules per image. The company’s open-source library, thrml, lets researchers simulate TSUs today, and even replicate the paper’s results on a GPU before the chips ship.

    The Physics of Intelligence: Turning Noise Into Computation

    At the heart of thermodynamic computing is a radical idea: computation as a physical relaxation process. Instead of enforcing digital determinism, TSUs let physical systems settle into low-energy configurations that correspond to probable solutions. This isn’t metaphorical — the chips literally use thermal fluctuations to perform Gibbs sampling across energy landscapes defined by machine-learned functions.

    In practical terms, it’s like replacing the brute-force precision of a GPU with the subtle statistical behavior of nature itself. Each transistor becomes a tiny particle in a thermodynamic system, collectively simulating the world’s most efficient sampler: reality.
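
    The relaxation picture can be made concrete with a tiny software Gibbs sampler. The two-spin energy function below is a toy example of my own, not Extropic’s model; a TSU would perform the equivalent resampling in analog hardware rather than in a Python loop.

```python
# Minimal Gibbs sampler over a toy two-spin energy E(s) = -J * s0 * s1.
# The energy function is an illustrative assumption; a TSU does this
# relaxation physically, using thermal noise instead of a PRNG.

import math
import random

J = 1.0  # coupling: aligned spins have lower energy

def conditional_p_up(other_spin: int) -> float:
    """P(s_i = +1 | other spin) under the Boltzmann distribution exp(-E)."""
    e_up, e_down = -J * other_spin, J * other_spin
    return math.exp(-e_up) / (math.exp(-e_up) + math.exp(-e_down))

def gibbs(steps: int = 100_000) -> float:
    """Run Gibbs sweeps; return the fraction of sweeps with aligned spins."""
    s = [1, 1]
    aligned = 0
    for _ in range(steps):
        for i in (0, 1):  # resample each spin conditioned on the other
            s[i] = 1 if random.random() < conditional_p_up(s[1 - i]) else -1
        aligned += (s[0] == s[1])
    return aligned / steps

print(gibbs())  # well above 0.5: the sampler settles into low-energy states
```

    Even this toy system shows the key behavior: without ever enumerating states, repeated noisy local updates concentrate the system in its probable, low-energy configurations.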

    From Lab Demo to Scalable Platform

    The XTR-0 kit is already in the hands of select researchers, startups, and tinkerers. Its modular design allows easy upgrades to upcoming chips — like Z-1, Extropic’s first production-scale TSU, which will support complex probabilistic machine learning workloads. Eventually, TSUs will integrate directly with conventional accelerators, possibly as PCIe cards or even hybrid GPU-TSU chips.

    Extropic’s roadmap extends beyond AI. Because TSUs efficiently sample from continuous probabilistic systems, they could accelerate simulations in physics, chemistry, and biology — domains that already rely on stochastic processes. The company envisions a world where thermodynamic computing powers climate models, drug discovery, and autonomous reasoning systems, all at a fraction of today’s energy cost.

    Breaking the AI Energy Wall

    Extropic’s October 2025 announcement comes at a pivotal time. Data centers are facing grid bottlenecks across the U.S., and some companies are building nuclear-adjacent facilities just to keep up with AI demand. With energy costs set to define the next decade of AI, a 10,000× improvement in energy efficiency isn’t just an innovation — it’s a revolution.

    If Extropic’s thermodynamic hardware lives up to its promise, it could mark a “zero-to-one” moment for computing — one where the laws of physics, not the limits of silicon, define what’s possible. As the company put it in their launch note: “Once we succeed, energy constraints will no longer limit AI scaling.”

    Read the full technical paper on arXiv and explore the official Extropic site for their thermodynamic roadmap.

  • xAI’s Macrohard: Elon Musk’s AI Answer to Microsoft

    What Is Macrohard?

    xAI’s Macrohard is an AI-powered software company challenging Microsoft. Its name swaps “micro” for “macro,” signaling bigger ambitions. Elon Musk teased it in 2021 on X with “Macrohard >> Microsoft.” Now it’s real. Musk says: “The @xAI MACROHARD project will be profoundly impactful at an immense scale. Our goal is a company that can do anything short of making physical objects.”

    MACROHARD logo on xAI supercomputer

    Macrohard features:

    • AI teams: Hundreds of AI agents for coding, images, and testing, acting like humans.
    • Software tools: Apps for automation, content, game design, and human-like chatbots.
    • Power: Runs on xAI’s Colossus supercomputer in Memphis, with millions of GPUs.

    xAI trademarked “Macrohard” on August 1, 2025, for AI software. They’re hiring for “Macrohard / Computer Control” roles.

    “Macrohard uses AI for coding and automation, powered by Grok to build next-level software.” — Grok (xAI’s AI)

    Why Now? Musk vs. Microsoft

    Musk’s feud with Microsoft, tied to their OpenAI investment, drives Macrohard. He’s sued OpenAI over ChatGPT’s iOS exclusivity. With $6B in funding (May 2024), xAI aims to disrupt Microsoft’s software, linking to Tesla and SpaceX.

    X Reactions

    X users are hyped, with memes about the name (in India, it sounds like a curse word). Some call it “the first AI corporation.” Reddit debates if it’s a game-changer.

    What’s Next?

    xAI’s Yuhuai Wu teased hiring for “Grok-5” and Macrohard by late 2025. It could change software development—faster and cheaper. Can it top Microsoft? Comment below!

  • How a Daily Question Made Mara Wiser: A Short Story About Practicing Wisdom

    Mara loved reading about wisdom. Her shelves were packed with Seneca and modern guides that promised enlightenment in neat lists. Still, her life felt unchanged, full of quick reactions and small mistakes.

    One morning, after a tense call with a friend, a line struck her: “No man was ever wise by chance.” She realized she had been consuming wisdom, not living it. So she started an experiment.

    Each day, Mara asked herself one question before she acted.

    • When angry: What is another way to look at this?
    • When unsure: If everyone made this choice, how would it affect the world?
    • When ashamed: Am I moving closer to my values or further away?
    • When judging: Have I done something similar before, and what was going on for me then?

    The questions did not fix everything at once, but they created a pause. In that pause, she noticed how fear tinted her thoughts, how her words drifted from her values, and how a caring interpretation could soften a hard moment.

    Weeks became months. She still stumbled, but less often. When her friend called again, they spoke with honesty and care. After the call, Mara realized something had shifted. She was no longer chasing wisdom on a page. She was practicing it, choice by choice.

    That is how wisdom grows: not by chance, but by action.

  • Dwarkesh Patel: From Podcasting Prodigy to AI Chronicler with The Scaling Era

    TLDW (Too Long; Didn’t Watch)

    Dwarkesh Patel, a 24-year-old podcasting sensation, has made waves with his deep, unapologetically intellectual interviews on science, history, and technology. In a recent Core Memory Podcast episode hosted by Ashlee Vance, Patel announced his new book, The Scaling Era: An Oral History of AI, co-authored with Gavin Leech and published by Stripe Press. Released digitally on March 25, 2025, with a hardcover to follow in July, the book compiles insights from AI luminaries like Mark Zuckerberg and Satya Nadella, offering a vivid snapshot of the current AI revolution. Patel’s journey from a computer science student to a chronicler of the AI age, his optimistic vision for a future enriched by artificial intelligence, and his reflections on podcasting as a tool for learning and growth take center stage in this engaging conversation.


    At just 24, Dwarkesh Patel has carved out a unique niche in the crowded world of podcasting. Known for his probing interviews with scientists, historians, and tech pioneers, Patel refuses to pander to short attention spans, instead diving deep into complex topics with a gravitas that belies his age. On March 25, 2025, he joined Ashlee Vance on the Core Memory Podcast to discuss his life, his meteoric rise, and his latest venture: a book titled The Scaling Era: An Oral History of AI, published by Stripe Press. The episode, recorded in Patel’s San Francisco studio, offers a window into the mind of a young intellectual who’s become a key voice in documenting the AI revolution.

    Patel’s podcasting career began as a side project while he was a computer science student at the University of Texas. What started with interviews of economists like Bryan Caplan and Tyler Cowen has since expanded into a platform—the Lunar Society—that tackles everything from ancient DNA to military history. But it’s his focus on artificial intelligence that has garnered the most attention in recent years. Having interviewed the likes of Dario Amodei, Satya Nadella, and Mark Zuckerberg, Patel has positioned himself at the epicenter of the AI boom, capturing the thoughts of the field’s biggest players as large language models reshape the world.

    The Scaling Era, co-authored with Gavin Leech, is the culmination of these efforts. Released digitally on March 25, 2025, with a print edition slated for July, the book stitches together Patel’s interviews into a cohesive narrative, enriched with commentary, footnotes, and charts. It’s an oral history of what Patel calls the “scaling era”—the period where throwing more compute and data at AI models has yielded astonishing, often mysterious, leaps in capability. “It’s one of those things where afterwards, you can’t get the sense of how people were thinking about it at the time,” Patel told Vance, emphasizing the book’s value as a time capsule of this pivotal moment.

    The process of creating The Scaling Era was no small feat. Patel credits co-author Leech and editor Rebecca for helping weave disparate perspectives—from computer scientists to primatologists—into a unified story. The first chapter, for instance, explores why scaling works, drawing on insights from AI researchers, neuroscientists, and anthropologists. “Seeing all these snippets next to each other was a really fun experience,” Patel said, highlighting how the book connects dots he’d overlooked in his standalone interviews.

    Beyond the book, the podcast delves into Patel’s personal story. Born in India, he moved to the U.S. at age eight, bouncing between rural states like North Dakota and West Texas as his father, a doctor on an H1B visa, took jobs where domestic talent was scarce. A high school debate star—complete with a “chiseled chin” and concise extemp speeches—Patel initially saw himself heading toward a startup career, dabbling in ideas like furniture resale and a philosophy-inspired forum called PopperPlay (a name he later realized had unintended connotations). But it was podcasting that took off, transforming from a gap-year experiment into a full-fledged calling.

    Patel’s optimism about AI shines through in the conversation. He envisions a future where AI eliminates scarcity, not just of material goods but of experiences—think aesthetics, peak human moments, and interstellar exploration. “I’m a transhumanist,” he admitted, advocating for a world where humanity integrates with AI to unlock vast potential. He predicts AI task horizons doubling every seven months, potentially leading to “discontinuous” economic impacts within 18 months if models master computer use and reinforcement learning (RL) environments. Yet he remains skeptical of a “software-only singularity,” arguing that physical bottlenecks—like chip manufacturing—will temper the pace of progress, requiring a broader tech stack upgrade akin to building an iPhone in 1900.

    On the race to artificial general intelligence (AGI), Patel questions whether the first lab to get there will dominate indefinitely. He points to fast-follow dynamics—where breakthroughs are quickly replicated at lower cost—and the coalescing approaches of labs like xAI, OpenAI, and Anthropic. “The cost of training these models is declining like 10x a year,” he noted, suggesting a future where AGI becomes commodified rather than monopolized. He’s cautiously optimistic about safety, too, estimating a 10-20% “P(doom)” (probability of catastrophic outcomes) but arguing that current lab leaders are far better than alternatives like unchecked nationalized efforts or a reckless trillion-dollar GPU hoard.

    Patel’s influences—like economist Tyler Cowen, who mentored him early on—and unexpected podcast hits—like military historian Sarah Paine—round out the episode. Paine, a Naval War College scholar whose episodes with Patel have exploded in popularity, exemplifies his knack for spotlighting overlooked brilliance. “You really don’t know what’s going to be popular,” he mused, advocating for following personal curiosity over chasing trends.

    Looking ahead, Patel aims to make his podcast the go-to place for understanding the AI-driven “explosive growth” he sees coming. Writing, though a struggle, will play a bigger role as he refines his takes. “I want it to become the place where… you come to make sense of what’s going on,” he said. In a world often dominated by shallow content, Patel’s commitment to depth and learning stands out—a beacon for those who’d rather grapple with big ideas than scroll through 30-second blips.

  • How AI is Revolutionizing Writing: Insights from Tyler Cowen and David Perell

    TLDW/TLDR

    Tyler Cowen, an economist and writer, shares practical ways AI transforms writing and research in a conversation with David Perell. He uses AI daily as a “secondary literature” tool to enhance reading and podcast prep, predicts fewer books due to AI’s rapid evolution, and emphasizes the enduring value of authentic, human-centric writing like memoirs and personal narratives.

    Detailed Summary of Video

    In a 68-minute YouTube conversation uploaded on March 5, 2025, economist Tyler Cowen joins writer David Perell to explore AI’s impact on writing and research. Cowen details his daily AI use—replacing stacks of books with large language models (LLMs) like o1 Pro, Claude, and DeepSeek for podcast prep and leisure reading, such as Shakespeare and Wuthering Heights. He highlights AI’s ability to provide context quickly, reducing hallucinations in top models by over tenfold in the past year (as of February 2025).

    The discussion shifts to writing: Cowen avoids AI for drafting to preserve his unique voice, though he uses it for legal background or critiquing drafts (e.g., spotting obnoxious tones). He predicts fewer books as AI outpaces long-form publishing cycles, favoring high-frequency formats like blogs or Substack. However, he believes “truly human” works—memoirs, biographies, and personal experience-based books—will persist, as readers crave authenticity over AI-generated content.

    Cowen also sees AI decentralizing into a “Republic of Science,” with models self-correcting and collaborating, though this remains speculative. For education, he integrates AI into his PhD classes, replacing textbooks with subscriptions to premium models. He warns academia lags in adapting, predicting AI will outstrip researchers in paper production within two years. Perell shares his use of AI for Bible study, praising its cross-referencing but noting experts still excel at pinpointing core insights.

    Practical tips emerge: use top-tier models (o1 Pro, Claude, DeepSeek), craft detailed prompts, and leverage AI for travel or data visualization. Cowen also plans an AI-written biography by “open-sourcing” his life via blog posts, showcasing AI’s potential to compile personal histories.

    Article Itself

    How AI is Revolutionizing Writing: Insights from Tyler Cowen and David Perell

    Artificial Intelligence is no longer a distant sci-fi dream—it’s a tool reshaping how we write, research, and think. In a recent YouTube conversation, economist Tyler Cowen and writer David Perell unpack the practical implications of AI for writers, offering a roadmap for navigating this seismic shift. Recorded on March 5, 2025, their discussion blends hands-on advice with bold predictions, grounded in Cowen’s daily AI use and Perell’s curiosity about its creative potential.

    Cowen, a prolific author and podcaster, doesn’t just theorize about AI—he lives it. He’s swapped towering stacks of secondary literature for LLMs like o1 Pro, Claude, and DeepSeek. Preparing for a podcast on medieval kings Richard II and Henry V, he once ordered 20-30 books; now, he interrogates AI for context, cutting prep time and boosting quality. “It’s more fun,” he says, describing how he queries AI about Shakespearean puzzles or Wuthering Heights chapters, treating it as a conversational guide. Hallucinations? Not a dealbreaker—top models have slashed errors dramatically since 2024, and as an interviewer, he prioritizes context over perfect accuracy.

    For writing, Cowen draws a line: AI informs, but doesn’t draft. His voice—cryptic, layered, parable-like—remains his own. “I don’t want the AI messing with that,” he insists, rejecting its smoothing tendencies. Yet he’s not above using it tactically—checking legal backgrounds for columns or flagging obnoxious tones in drafts (a tip from Agnes Callard). Perell nods, noting AI’s knack for softening managerial critiques, though Cowen prefers his weirdness intact.

    The future of writing, Cowen predicts, is bifurcated. Books, with their slow cycles, face obsolescence—why write a four-year predictive tome when AI evolves monthly? He’s shifted to “ultra high-frequency” outputs like blogs and Substack, tackling AI’s rapid pace. Yet “truly human” writing—memoirs, biographies, personal narratives—will endure. Readers, he bets, want authenticity over AI’s polished slop. His next book, Mentors, leans into this, drawing on lived experience AI can’t replicate.

    Perell, an up-and-coming writer, feels the tension. AI’s prowess deflates his hard-earned skills, yet he’s excited to master it. He uses it to study the Bible, marveling at its cross-referencing, though it lacks the human knack for distilling core truths. Both agree: AI’s edge lies in specifics—detailed prompts yield gold, vague ones yield “mid” mush. Cowen’s tip? Imagine prompting an alien, not a human—literal, clear, context-rich.

    Educationally, Cowen’s ahead of the curve. His PhD students ditch textbooks for AI subscriptions, weaving it into papers to maximize quality. He laments academia’s inertia—AI could outpace researchers in two years, yet few adapt. Perell’s takeaway? Use the best models. “You’re hopeless without o1 Pro,” Cowen warns, highlighting the gap between free and cutting-edge tools.

    Beyond writing, AI’s horizon dazzles. Cowen envisions a decentralized “Republic of Science,” where models self-correct and collaborate, mirroring human progress. Large context windows (Gemini’s 2 million tokens, soon 10-20 million) will decode regulatory codes and historical archives, birthing jobs in data conversion. Inside companies, he suspects AI firms lead secretly, turbocharging their own models.

    Practically, Cowen’s stack—o1 Pro for queries, Claude for thoughtful prose, DeepSeek for wild creativity, Perplexity for citations—offers a playbook. He even plans an AI-crafted biography, “open-sourcing” his life via blog posts about childhood in Fall River or his dog, Spinosa. It’s low-cost immortality, a nod to AI’s archival power.

    For writers, the message is clear: adapt or fade. AI won’t just change writing—it’ll redefine what it means to create. Human quirks, stories, and secrets will shine amid the deluge of AI content. As Cowen puts it, “The truly human books will stand out all the more.” The revolution’s here—time to wield it.

  • Why Every Nation Needs Its Own AI Strategy: Insights from Jensen Huang & Arthur Mensch

    In a world where artificial intelligence (AI) is reshaping economies, cultures, and security, the stakes for nations have never been higher. In a recent episode of The a16z Podcast, Jensen Huang, CEO of NVIDIA, and Arthur Mensch, co-founder and CEO of Mistral, unpack the urgent need for sovereign AI—national strategies that ensure countries control their digital futures. Drawing from their discussion, this article explores why every nation must prioritize AI, the economic and cultural implications, and practical steps to build a robust strategy.

    The Global Race for Sovereign AI

    The conversation kicks off with a powerful idea: AI isn’t just about computing—it’s about culture, economics, and sovereignty. Huang stresses that no one will prioritize a nation’s unique needs more than the nation itself. “Nobody’s going to care more about the Swedish culture… than Sweden,” he says, highlighting the risk of digital dependence on foreign powers. Mensch echoes this, framing AI as a tool nations must wield to avoid modern digital colonialization—where external entities dictate a country’s technological destiny.

    AI as a General-Purpose Technology

    Mensch positions AI as a transformative force, comparable to electricity or the internet, with applications spanning agriculture, healthcare, defense, and beyond. Yet Huang cautions against waiting for a universal solution from a single provider. “Intelligence is for everyone,” he asserts, urging nations to tailor AI to their languages, values, and priorities. Mistral’s M-Saaba model, optimized for Arabic, exemplifies this—outperforming larger models by focusing on linguistic and cultural specificity.

    Economic Implications: A Game-Changer for GDP

    The economic stakes are massive. Mensch predicts AI could boost GDP by double digits for countries that invest wisely, warning that laggards will see wealth drain to tech-forward neighbors. Huang draws a parallel to the electricity era: nations that built their own grids prospered, while others became reliant. For leaders, this means securing chips, data centers, and talent to capture AI’s economic potential—a must for both large and small nations.

    Cultural Infrastructure and Digital Workforce

    Huang introduces a compelling metaphor: AI as a “digital workforce” that nations must onboard, train, and guide, much like human employees. This workforce should embody local values and laws, something no outsider can fully replicate. Mensch adds that AI’s ability to produce content—text, images, voice—makes it a social construct, deeply tied to a nation’s identity. Without control, countries risk losing their cultural sovereignty to centralized models reflecting foreign biases.

    Open-Source vs. Closed AI: A Path to Independence

    Both Huang and Mensch advocate for open-source AI as a cornerstone of sovereignty. Mensch explains that models like Mistral Nemo, developed with NVIDIA, empower nations to deploy AI on their own infrastructure, free from closed-system dependency. Open-source also fuels innovation—Mistral’s releases spurred Meta and others to follow suit. Huang highlights its role in niche markets like healthcare and mining, plus its security edge: global scrutiny makes open models safer than opaque alternatives.

    Risks and Challenges of AI Adoption

    Leaders often worry about public backlash—will AI replace jobs? Mensch suggests countering this by upskilling citizens and showcasing practical benefits, like France’s AI-driven unemployment agency connecting workers to opportunities. Huang sees AI as “the greatest equalizer,” noting more people use ChatGPT than code in C++, shrinking the tech divide. Still, both acknowledge the initial hurdle: setting up AI systems is tough, though improving tools make it increasingly manageable.

    Building a National AI Strategy

    Huang and Mensch offer a blueprint for action:

    • Talent: Train a local workforce to customize AI systems.
    • Infrastructure: Secure chips from NVIDIA and software from partners like Mistral.
    • Customization: Adapt open-source models with local data and culture.
    • Vision: Prepare for agentic and physical AI breakthroughs in manufacturing and science.

    Huang predicts the next decade will bring AI that thinks, acts, and understands physics—revolutionizing industries vital to emerging markets, from energy to manufacturing.

    Why It’s Urgent

    The podcast ends with a clarion call: AI is “the most consequential technology of all time,” and nations must act now. Huang urges leaders to engage actively, not just admire from afar, while Mensch emphasizes education and partnerships to safeguard economic and cultural futures. For more, follow Jensen Huang (@nvidia) and Arthur Mensch (@arthurmensch) on X, or visit NVIDIA and Mistral’s websites.