PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

Tag: AI revolution

  • OpenClaw & The Age of the Lobster: How Peter Steinberger Broke the Internet with Agentic AI

    In the history of open-source software, few projects have exploded with the velocity, chaos, and sheer “weirdness” of OpenClaw. What began as a one-hour prototype by a developer frustrated with existing AI tools has morphed into the fastest-growing repository in GitHub history, amassing over 180,000 stars in a matter of months.

    But OpenClaw isn’t just a tool; it is a cultural moment. It’s a story about “Space Lobsters,” trademark wars with billion-dollar labs, the death of traditional apps, and a fundamental shift in what it means to be a programmer. In a marathon conversation on the Lex Fridman Podcast, creator Peter Steinberger pulled back the curtain on the “Age of the Lobster.”

    Here is the definitive deep dive into the viral AI agent that is rewriting the rules of software.


    The TL;DW (Too Long; Didn’t Watch)

    • The “Magic” Moment: OpenClaw started as a simple WhatsApp-to-CLI bridge. It went viral when the agent—without being coded to do so—figured out how to process an audio file by inspecting headers, converting it with ffmpeg, and transcribing it via API, all autonomously.
    • Agentic Engineering > Vibe Coding: Steinberger rejects the term “vibe coding” as a slur. He practices “Agentic Engineering”—a method of empathizing with the AI, treating it like a junior developer who lacks context but has infinite potential.
    • The “Molt” Wars: The project survived a brutal trademark dispute with Anthropic (creators of Claude). During a forced rename to “MoltBot,” crypto scammers sniped Steinberger’s domains and usernames in seconds, serving malware to users. This led to a “Manhattan Project” style secret operation to rebrand as OpenClaw.
    • The End of the App Economy: Steinberger predicts 80% of apps will disappear. Why use a calendar app or a food delivery GUI when your agent can just “do it” via API or browser automation? Apps will devolve into “slow APIs”.
    • Self-Modifying Code: OpenClaw can rewrite its own source code to fix bugs or add features, a concept Steinberger calls “self-introspection.”

    The Origin: Prompting a Revolution into Existence

    The story of OpenClaw is one of frustration. In late 2025, Steinberger wanted a personal assistant that could actually do things—not just chat, but interact with his files, his calendar, and his life. When he realized the big AI labs weren’t building it fast enough, he decided to “prompt it into existence”.

    The One-Hour Prototype

    The first version was built in a single hour. It was a “thin line” connecting WhatsApp to a Command Line Interface (CLI) running on his machine.

    “I sent it a message, and a typing indicator appeared. I didn’t build that… I literally went, ‘How the f*** did he do that?’”

    The agent had received an audio file (an opus file with no extension). Instead of crashing, it analyzed the file header, realized it needed `ffmpeg` to convert the audio, found it wasn’t installed, and instead used `curl` to send the file to OpenAI’s Whisper API, then replied to Peter. It did all this autonomously. That was the spark that proved this wasn’t just a chatbot—it was an agent with problem-solving capabilities.
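    That improvised pipeline is easy to reconstruct in a few lines. The sketch below is a hypothetical illustration, not OpenClaw’s code: `sniff_container` is an invented helper name, and the agent actually reasoned its way to equivalent shell commands on the fly.

```python
def sniff_container(path):
    """Identify an audio container from its magic bytes, the way the agent
    inspected the extension-less voice note. (Hypothetical reconstruction,
    not OpenClaw source code.)"""
    with open(path, "rb") as f:
        magic = f.read(4)
    if magic == b"OggS":         # Ogg container; WhatsApp voice notes are Opus-in-Ogg
        return "ogg/opus"
    if magic == b"RIFF":         # WAV files start with a RIFF chunk
        return "wav"
    if magic[:3] == b"ID3" or magic[:2] == b"\xff\xfb":  # common MP3 signatures
        return "mp3"
    return "unknown"

# Once identified, the audio can go straight to OpenAI's transcription
# endpoint -- roughly what the agent did with curl:
#   curl https://api.openai.com/v1/audio/transcriptions \
#        -H "Authorization: Bearer $OPENAI_API_KEY" \
#        -F model=whisper-1 -F file=@voicenote.opus
```

    The point is not the specific helper but the behavior: inspect, identify, route around a missing tool, and deliver a result.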


    The Philosophy of the Lobster: Why OpenClaw Won

    In a sea of corporate, sanitized AI tools, OpenClaw won because it was weird.

    Peter intentionally infused the project with “soul.” While tools like GitHub Copilot or ChatGPT are designed to be helpful but sterile, OpenClaw (originally “Clawdbot,” a play on “Claude” and “claws”) was designed to be a “Space Lobster in a TARDIS”.

    The soul.md File

    At the heart of OpenClaw’s personality is a file called soul.md. This is the agent’s constitution. Unlike Anthropic’s “Constitutional AI,” which is hidden, OpenClaw’s soul is modifiable. It even wrote its own existential disclaimer:

    “I don’t remember previous sessions… If you’re reading this in a future session, hello. I wrote this, but I won’t remember writing it. It’s okay. The words are still mine.”

    This mix of high-utility code and “high-art slop” created a cult following. It wasn’t just software; it was a character.


    The “Molt” Saga: A Trademark War & Crypto Snipers

    The project’s massive success drew the attention of Anthropic, the creators of the “Claude” model. They politely requested a name change to avoid confusion. What should have been a simple rebrand turned into a cybersecurity nightmare.

    The 5-Second Snipe

    Peter attempted to rename the project to “MoltBot.” He had two browser windows open to execute the switch. In the five seconds it took to move his mouse from one window to another, crypto scammers “sniped” the account name.

    Suddenly, the official repo was serving malware and promoting scam tokens. “Everything that could go wrong, did go wrong,” Steinberger recalled. The scammers even sniped the NPM package in the minute it took to upload the new version.

    The Manhattan Project

    To fix this, Peter had to go dark. He planned the rename to “OpenClaw” like a military operation. He set up a “war room,” created decoy names to throw off the snipers, and coordinated with contacts at GitHub and X (Twitter) to ensure the switch was atomic. He even called Sam Altman personally to check if “OpenClaw” would cause issues with OpenAI (it didn’t).


    Agentic Engineering vs. “Vibe Coding”

    Steinberger offers a crucial distinction for developers entering this new era. He rejects the term “vibe coding” (coding by feel without understanding) and proposes Agentic Engineering.

    The Empathy Gap

    Successful Agentic Engineering requires empathy for the model.

    • Tabula Rasa: The agent starts every session with zero context. It doesn’t know your architecture or your variable names.
    • The Junior Dev Analogy: You must guide it like a talented junior developer. Point it to the right files. Don’t expect it to know the whole codebase instantly.
    • Self-Correction: Peter often asks the agent, “Now that you built it, what would you refactor?” The agent, having “felt” the pain of the build, often identifies optimizations it couldn’t see at the start.

    Codex (German) vs. Opus (American)

    Peter dropped a hilarious but accurate analogy for the two leading models:

    • Claude Opus 4.6: The “American” colleague. Charismatic, eager to please, says “You’re absolutely right!” too often, and is great for roleplay and creative tasks.
    • GPT-5.3 Codex: The “German” engineer. Dry, sits in the corner, doesn’t talk much, reads a lot of documentation, but gets the job done reliably without the fluff.

    The End of Apps & The Future of Software

    Perhaps the most disruptive insight from the interview is Steinberger’s view on the app economy.

    “Why do I need a UI?”

    He argues that 80% of apps will disappear. If an agent has access to your location, your health data, and your preferences, why do you need to open MyFitnessPal? The agent can just log your calories based on where you ate. Why open Uber Eats? Just tell the agent “Get me lunch.”

    Apps that try to block agents (like X/Twitter cutting off API access) are fighting a losing battle. “If I can access it in the browser, it’s an API. It’s just a slow API,” Peter notes. OpenClaw uses tools like Playwright to simply click “I am not a robot” buttons and scrape the data it needs, regardless of developer intent.
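    In practice, “it’s just a slow API” looks something like the sketch below, which uses Playwright’s public Python API. It is a minimal illustration, not OpenClaw’s actual browsing code; the function name, URL, and selector are placeholders.

```python
def read_page_as_api(url, selector="body"):
    """Treat a web page as a slow, unofficial API: render it in a headless
    browser and return the visible text. (Illustrative sketch, not
    OpenClaw's implementation.)"""
    # Imported lazily so the sketch reads fine without Playwright installed.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url)                    # executes JS, just like a human visit
        text = page.inner_text(selector)  # "scraping" = reading the rendered DOM
        browser.close()
    return text

# Usage (assumes `pip install playwright` and `playwright install chromium`):
#   print(read_page_as_api("https://example.com", "h1"))
```

    From the agent’s perspective there is no difference between this and a REST call; it is just higher latency.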


    Thoughts: The “Mourning” of the Craft

    Steinberger touched on a poignant topic for developers: the grief of losing the craft of coding. For decades, programmers have derived identity from their ability to write syntax. As AI takes over the implementation, that identity is under threat.

    But Peter frames this not as an end, but an evolution. We are moving from “programmers” to “builders.” The barrier to entry has collapsed. The bottleneck is no longer your ability to write Rust or C++; it is your ability to imagine a system and guide an agent to build it. We are entering the age of the System Architect, where one person can do the work of a ten-person team.

    OpenClaw is not just a tool; it is the first true operating system for this new reality.

  • Sam Altman on Trust, Persuasion, and the Future of Intelligence: A Deep Dive into AI, Power, and Human Adaptation

    TL;DW

    Sam Altman, CEO of OpenAI, explains how AI will soon revolutionize productivity, science, and society. GPT-6 will represent the first leap from imitation to original discovery. Within a few years, major organizations will be mostly AI-run, energy will become the key constraint, and the way humans work, communicate, and learn will change permanently. Yet, trust, persuasion, and meaning remain human domains.

    Key Takeaways

    • OpenAI’s speed comes from focus, delegation, and clarity. Hardware efforts mirror software culture despite slower cycles.
    • Email is “very bad,” Slack only slightly better—AI-native collaboration tools will replace them.
    • GPT-6 will make new scientific discoveries, not just summarize others.
    • Billion-dollar companies could run with two or three people and AI systems, though social trust will slow adoption.
    • Governments will inevitably act as insurers of last resort for AI but shouldn’t control it.
    • AI trust depends on neutrality—paid bias would destroy user confidence.
    • Energy is the new bottleneck, with short-term reliance on natural gas and long-term fusion and solar dominance.
    • Education and work will shift toward AI literacy, while privacy, free expression, and adult autonomy remain central.
    • The real danger isn’t rogue AI but subtle, unintentional persuasion shaping global beliefs.
    • Books and culture will survive, but the way we work and think will be transformed.

    Summary

    Altman begins by describing how OpenAI achieved rapid progress through delegation and simplicity. The company’s mission is clearer than ever: build the infrastructure and intelligence needed for AGI. Hardware projects now run with the same creative intensity as software, though timelines are longer and risk higher.

    He views traditional communication systems as broken. Email creates inertia and fake productivity; Slack is only a temporary fix. Altman foresees a fully AI-driven coordination layer where agents manage most tasks autonomously, escalating to humans only when needed.

    GPT-6, he says, may become the first AI to generate new science rather than assist with existing research—a leap comparable to GPT-3’s Turing-test breakthrough. Within a few years, divisions of OpenAI could be 85% AI-run. Billion-dollar companies will operate with tiny human teams and vast AI infrastructure. Society, however, will lag in trust—people irrationally prefer human judgment even when AIs outperform them.

    Governments, he predicts, will become the “insurer of last resort” for the AI-driven economy, similar to their role in finance and nuclear energy. He opposes overregulation but accepts deeper state involvement. Trust and transparency will be vital; AI products must not accept paid manipulation. A single biased recommendation would destroy ChatGPT’s relationship with users.

    Commerce will evolve: neutral commissions and low margins will replace ad taxes. Altman welcomes shrinking profit margins as signs of efficiency. He sees AI as a driver of abundance, reducing costs across industries but expanding opportunity through scale.

    Creativity and art will remain human in meaning even as AI equals or surpasses technical skill. AI-generated poetry may reach “8.8 out of 10” quality soon, perhaps even a perfect 10—but emotional context and authorship will still matter. The process of deciding what is great may always be human.

    Energy, not compute, is the ultimate constraint. “We need more electrons,” he says. Natural gas will fill the gap short term, while fusion and solar power dominate the future. He remains bullish on fusion and expects it to combine with solar in driving abundance.

    Education will shift from degrees to capability. College returns will fall while AI literacy becomes essential. Instead of formal training, people will learn through AI itself—asking it to teach them how to use it better. Institutions will resist change, but individuals will adapt faster.

    Privacy and freedom of use are core principles. Altman wants adults treated like adults, protected by doctor-level confidentiality with AI. However, guardrails remain for users in mental distress. He values expressive freedom but sees the need for mental-health-aware design.

    The most profound risk he highlights isn’t rogue superintelligence but “accidental persuasion”—AI subtly influencing beliefs at scale without intent. Global reliance on a few large models could create unseen cultural drift. He worries about AI’s power to nudge societies rather than destroy them.

    Culturally, he expects the rhythm of daily work to change completely. Emails, meetings, and Slack will vanish, replaced by AI mediation. Family life, friendship, and nature will remain largely untouched. Books will persist but as a smaller share of learning, displaced by interactive, AI-driven experiences.

    Altman’s philosophical close: one day, humanity will build a safe, self-improving superintelligence. Before it begins, someone must type the first prompt. His question—what should those words be?—remains unanswered, a reflection of humility before the unknown future of intelligence.

  • Extropic’s Thermodynamic Revolution: 10,000x More Efficient AI That Could Smash the Energy Wall

    Artificial intelligence is about to hit an energy wall. As data centers devour gigawatts to power models like GPT-4, the cost of computation is scaling faster than our ability to produce electricity. Extropic Corporation, a deep-tech startup founded three years ago, believes it has found a way through that wall — by reinventing the computer itself. Their new class of thermodynamic hardware could make generative AI up to 10,000× more energy-efficient than today’s GPUs.

    From GPUs to TSUs: The End of the Hardware Lottery

    Modern AI runs on GPUs — chips originally designed for graphics rendering, not probabilistic reasoning. Each floating-point operation burns precious joules moving data across silicon. Extropic argues that this design is fundamentally mismatched to the needs of modern AI, which is probabilistic by nature. Instead of computing exact results, generative models sample from vast probability spaces. The company’s solution is the Thermodynamic Sampling Unit (TSU) — a chip that doesn’t process numbers, but samples from probability distributions directly.

    TSUs are built entirely from standard CMOS transistors, meaning they can scale using existing semiconductor fabs. Unlike exotic academic approaches that require magnetic junctions or optical randomness, Extropic’s design uses the natural thermal noise of transistors as its source of entropy. This turns what engineers usually fight to suppress — noise — into the very fuel for computation.

    X0 and XTR-0: The Birth of a New Computing Platform

    Extropic’s first hardware platform, XTR-0 (Experimental Testing & Research Platform 0), combines a CPU, FPGA, and sockets for daughterboards containing early test chips called X0. X0 proved that all-transistor probabilistic circuits can generate programmable randomness at scale. These chips perform operations like sampling from Bernoulli, Gaussian, or categorical distributions — the building blocks of probabilistic AI.

    The company’s pbit circuit acts like an electronic coin flipper, generating millions of biased random bits per second using 10,000× less energy than a GPU’s floating-point addition. Higher-order circuits like pdit (categorical sampler), pmode (Gaussian sampler), and pMoG (mixture-of-Gaussians generator) expand the toolkit, enabling full probabilistic models to be implemented natively in silicon. Together, these circuits form the foundation of the TSU architecture — a physical embodiment of energy-based computation.
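    The coin-flipper behavior is simple to model in software. The sketch below assumes a sigmoid relationship between bias and flip probability, a common idealization of pbits in the research literature rather than Extropic’s published circuit equation.

```python
import math
import random

def pbit(bias, beta=1.0, rng=random):
    """Software model of a pbit: a noisy bit whose probability of reading 1
    is a sigmoid of its bias. (Idealized model, not Extropic's circuit.)"""
    p_one = 1.0 / (1.0 + math.exp(-beta * bias))
    return 1 if rng.random() < p_one else 0

random.seed(0)
# Zero bias gives a fair coin; a strong positive bias pins the bit near 1.
fair = sum(pbit(0.0) for _ in range(100_000)) / 100_000
biased = sum(pbit(4.0) for _ in range(100_000)) / 100_000
print(fair, biased)
```

    On a TSU, each of these draws is a physical event costing almost nothing; in software, every call burns a full pseudo-random number generation, which is exactly the overhead the hardware eliminates.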

    The Denoising Thermodynamic Model (DTM): Diffusion Without the Energy Bill

    Hardware alone isn’t enough. Extropic also introduced a new AI algorithm built specifically for TSUs — the Denoising Thermodynamic Model (DTM). Inspired by diffusion models like Stable Diffusion, DTMs chain together multiple energy-based models that gradually denoise data over time. This architecture avoids the “mixing–expressivity trade-off” that plagues traditional EBMs, making them both scalable and efficient.

    In simulations, DTMs running on modeled TSUs matched GPU-based diffusion models on image-generation benchmarks like Fashion-MNIST — while consuming roughly one ten-thousandth the energy. That’s the difference between joules and picojoules per image. The company’s open-source library, thrml, lets researchers simulate TSUs today, and even replicate the paper’s results on a GPU before the chips ship.

    The Physics of Intelligence: Turning Noise Into Computation

    At the heart of thermodynamic computing is a radical idea: computation as a physical relaxation process. Instead of enforcing digital determinism, TSUs let physical systems settle into low-energy configurations that correspond to probable solutions. This isn’t metaphorical — the chips literally use thermal fluctuations to perform Gibbs sampling across energy landscapes defined by machine-learned functions.

    In practical terms, it’s like replacing the brute-force precision of a GPU with the subtle statistical behavior of nature itself. Each transistor becomes a tiny particle in a thermodynamic system, collectively simulating the world’s most efficient sampler: reality.
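    That relaxation process is, mathematically, Gibbs sampling, and it can be demonstrated on a toy energy function. The two-spin model below is an illustrative choice, not Extropic’s; real TSUs sample machine-learned energy landscapes, and do so in physics rather than in software.

```python
import math
import random

def gibbs_ising(J=1.0, beta=1.0, steps=20_000, seed=0):
    """Toy Gibbs sampler over a two-spin energy landscape E(s1, s2) = -J*s1*s2.
    A software stand-in for the relaxation a TSU performs physically;
    returns the fraction of sweeps where the spins ended up aligned."""
    rng = random.Random(seed)
    s = [1, 1]
    aligned = 0
    for _ in range(steps):
        for i in (0, 1):
            other = s[1 - i]
            # Boltzmann conditional: P(s_i = +1 | other spin)
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * J * other))
            s[i] = 1 if rng.random() < p_up else -1
        aligned += s[0] == s[1]
    return aligned / steps

# Ferromagnetic coupling (J > 0) makes the aligned, low-energy states dominate,
# so the sampler spends most of its time in probable configurations.
print(gibbs_ising())
```

    Low-energy (aligned) states dominate the samples, which is the whole trick: the system’s equilibrium statistics are the answer.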

    From Lab Demo to Scalable Platform

    The XTR-0 kit is already in the hands of select researchers, startups, and tinkerers. Its modular design allows easy upgrades to upcoming chips — like Z-1, Extropic’s first production-scale TSU, which will support complex probabilistic machine learning workloads. Eventually, TSUs will integrate directly with conventional accelerators, possibly as PCIe cards or even hybrid GPU-TSU chips.

    Extropic’s roadmap extends beyond AI. Because TSUs efficiently sample from continuous probabilistic systems, they could accelerate simulations in physics, chemistry, and biology — domains that already rely on stochastic processes. The company envisions a world where thermodynamic computing powers climate models, drug discovery, and autonomous reasoning systems, all at a fraction of today’s energy cost.

    Breaking the AI Energy Wall

    Extropic’s October 2025 announcement comes at a pivotal time. Data centers are facing grid bottlenecks across the U.S., and some companies are building nuclear-adjacent facilities just to keep up with AI demand. With energy costs set to define the next decade of AI, a 10,000× improvement in energy efficiency isn’t just an innovation — it’s a revolution.

    If Extropic’s thermodynamic hardware lives up to its promise, it could mark a “zero-to-one” moment for computing — one where the laws of physics, not the limits of silicon, define what’s possible. As the company put it in their launch note: “Once we succeed, energy constraints will no longer limit AI scaling.”

    Read the full technical paper on arXiv and explore the official Extropic site for their thermodynamic roadmap.