PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

  • Boris Cherny Says Coding Is “Solved” — Head of Claude Code Reveals What Comes Next for Software Engineers

    Boris Cherny, creator and head of Claude Code at Anthropic, sat down with Lenny Rachitsky on Lenny’s Podcast to drop one of the most consequential interviews in recent tech history. With Claude Code now responsible for 4% of all public GitHub commits — and growing faster every day — Cherny laid out a vision where traditional coding is a solved problem and the real frontier has shifted to idea generation, agentic AI, and a new role he calls the “Builder.”


    TLDW (Too Long; Didn’t Watch)

    Boris Cherny, the head of Claude Code at Anthropic, hasn’t manually written a single line of code since November 2025 — and he ships 10 to 30 pull requests every day. Claude Code now accounts for 4% of all public GitHub commits and is projected to reach 20% by end of 2026. Cherny believes coding as we know it is “solved” and that the future belongs to generalist “Builders” who blend product thinking, design sense, and AI orchestration. He advocates for underfunding teams, giving engineers unlimited tokens, building products for the model six months from now (not today), and following the “bitter lesson” of betting on the most general model. The Cowork product — Anthropic’s agentic tool for non-technical tasks — was built in just 10 days using Claude Code itself. Cherny also revealed three layers of AI safety at Anthropic: mechanistic interpretability, evals, and real-world monitoring.


    Key Takeaways

    1. Claude Code’s Growth Is Staggering

    Claude Code now authors approximately 4% of all public GitHub commits, and Anthropic believes the real number is significantly higher when private repositories are included. Daily active users doubled in the month before this interview, and the growth curve isn’t just rising — it’s accelerating. SemiAnalysis predicted Claude Code will reach 20% of all GitHub commits by end of 2026. Claude Code alone is generating roughly $2 billion in revenue, with Anthropic overall at approximately $15 billion.

    2. 100% AI-Written Code Is the New Normal

    Cherny hasn’t manually edited a single line of code since November 2025. He ships 10 to 30 pull requests per day, making him one of the most prolific engineers at Anthropic — all through Claude Code. He still reviews code and maintains human checkpoints, but the actual writing of code is entirely handled by AI. Claude also reviews 100% of pull requests at Anthropic before human review.

    3. Coding Is “Solved” — The Frontier Has Shifted

    In Cherny’s view, coding — at least the kind of programming most engineers do — is a solved problem. The new frontier is idea generation. Claude is already analyzing bug reports and telemetry data to propose its own fixes and suggest what to build next. The shift is from “tool” to “co-worker.” Cherny expects this to become increasingly true across every codebase and tech stack over the coming months.

    4. The Rise of the “Builder” Role

    Traditional role boundaries between engineer, product manager, and designer are dissolving. On the Claude Code team, everyone codes — the PM, the engineering manager, the designer, the finance person, the data scientist. Cherny predicts the title “Software Engineer” will start disappearing by end of 2026, replaced by something like “Builder” — a generalist who blends design sense, business logic, technical orchestration, and user empathy.

    5. Underfunding Teams Is a Feature, Not a Bug

    Cherny advocates deliberately underfunding teams as a strategy. When you assign one engineer to a project instead of five, they’re forced to leverage Claude Code to automate everything possible. This isn’t about cost-cutting — it’s about forcing innovation through constraint. The results at Anthropic have been dramatic: while the engineering team grew roughly 4x, productivity per engineer increased 200% in terms of pull requests shipped.

    6. Give Engineers Unlimited Tokens

    Rather than hiring more headcount, Cherny’s advice to CTOs is to give engineers as many tokens as possible. Let them experiment with the most capable models without worrying about cost. The most innovative ideas come from people pushing AI to its limits. Some Anthropic engineers are spending hundreds of thousands of dollars per month in tokens. Optimize costs later — only after you’ve found the idea that works.

    7. Build for the Model Six Months From Now

    One of Cherny’s most actionable insights: don’t build for today’s model capabilities — build for where the model will be in six months. Early versions of Claude Code only wrote about 20% of Cherny’s code. But the team bet on exponential improvement, and when Opus 4 and Sonnet 4 arrived, product-market fit clicked instantly. This means your product might feel rough at first, but when the next model generation drops, you’ll be perfectly positioned.

    8. The Bitter Lesson Applied to Product

    Cherny references Rich Sutton’s famous “Bitter Lesson” blog post as a core principle for the Claude Code team: the more general model will always outperform the more specific one. In practice, this means avoiding rigid workflows and orchestration scaffolding around AI models. Don’t box the model in. Give it tools, give it a goal, and let it figure out the path. Scaffolding might improve performance 10-20%, but those gains get wiped out with the next model generation.

    9. Latent Demand — The Most Important Product Principle

    Cherny calls latent demand “the single most important principle in product.” The idea: watch how people misuse or hack your product for purposes you didn’t design it for. That’s where your next product lives. Facebook Marketplace came from 40% of Facebook Group posts being buy-and-sell. Cowork came from non-engineers using Claude Code’s terminal for things like growing tomato plants, analyzing genomes, and recovering wedding photos from corrupted hard drives. There’s also a new dimension: watching what the model is trying to do and building tools to make that easier.

    10. Cowork Was Built in 10 Days

    Anthropic’s Cowork product — their agentic tool for non-technical tasks — was implemented by a small team in just 10 days, using Claude Code to build its own virtual machine and security scaffolding. Cowork was immediately a bigger hit than Claude Code was at launch. It can pay parking tickets, cancel subscriptions, manage project spreadsheets, message team members on Slack, respond to emails, and handle forms — and it’s growing faster than Claude Code did in its early days.

    11. Three Layers of AI Safety at Anthropic

    Cherny outlined three layers of safety: (1) Mechanistic interpretability — monitoring neurons inside the model to understand what it’s doing and detect things like deception at the neural level. (2) Evals — lab testing where the model is placed in synthetic situations to check alignment. (3) Real-world monitoring — releasing products as research previews to study unpredictable agent behavior in the wild. Claude Code was used internally for 4-5 months before public release specifically for safety study.

    12. Why Boris Left Anthropic for Cursor (and Came Back After Two Weeks)

    Cherny briefly left Anthropic to join Cursor, drawn by their focus on product quality. But within two weeks, he realized what he was missing: Anthropic’s safety mission. He described it as a psychological need — without mission-driven work, even building a great product wasn’t a substitute. He returned to Anthropic and the rest is history.

    13. Manual Coding Skills Will Become Irrelevant in 1-2 Years

    Cherny compared manual coding to assembly language — it’ll still exist beneath the surface, and understanding the fundamentals helps for now, but within a year or two it won’t matter for most engineers. He likened it to the printing press transition: a skill once limited to scribes became universal literacy over time. The volume of code created will explode while the cost drops dramatically.

    14. Pro Tips for Using Claude Code Effectively

    Cherny shared three specific tips: (1) Use the most capable model — currently Opus 4.6 with maximum effort enabled. Cheaper models often cost more tokens in the end because they require more correction and handholding. (2) Use Plan Mode — hit Shift+Tab twice in the terminal to enter plan mode, which tells the model not to write code yet. Go back and forth on the plan, then auto-accept edits once it looks good. Opus 4.6 will one-shot it correctly almost every time. (3) Explore different interfaces — Claude Code runs on terminal, desktop app, iOS, Android, web, Slack, GitHub, and IDE extensions. The same agent runs everywhere. Find what works for you.


    Detailed Summary

    The Origin Story of Claude Code

    Claude Code began as a one-person hack. When Cherny joined Anthropic, he spent a month building weird prototypes that mostly never shipped, then spent another month doing post-training to understand the research side. He believes deeply that to build great products on AI, you have to understand “the layer under the layer” — meaning the model itself.

    The first version was terminal-based and called “Claude CLI.” When he demoed it internally, it got two likes. Nobody thought a coding tool could be terminal-based. But the terminal form factor was chosen partly out of necessity (he was a solo developer) and partly because it was the only interface that could keep up with how fast the underlying model was improving.

    The breakthrough moment during prototyping: Cherny gave the model a bash tool and asked it what music he was listening to. The model figured out — without any specific instructions — how to use the bash tool to answer that question. That moment of emergent tool use convinced him he was onto something.
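
    The bash-tool moment can be made concrete with a toy sketch. Everything here is hypothetical: the `BASH_TOOL` spec, `run_bash`, and the hard-coded `stub_model` are illustrative stand-ins, not Claude Code's actual internals. The sketch only shows the shape of the idea — describe a tool to the model, execute whatever command the model chooses, and hand the output back:

    ```python
    import subprocess

    # Hypothetical tool description handed to the model (not Anthropic's real schema).
    BASH_TOOL = {
        "name": "bash",
        "description": "Run a shell command and return its stdout.",
        "parameters": {"command": "string"},
    }

    def run_bash(command: str, timeout: int = 10) -> str:
        """Execute the command the model asked for and capture its output."""
        result = subprocess.run(
            command, shell=True, capture_output=True, text=True, timeout=timeout
        )
        return result.stdout.strip()

    def stub_model(question: str, tools: list) -> dict:
        """Stand-in for a real model call. A real model would choose the tool
        and compose the command itself; we hard-code a harmless one here."""
        return {"tool": "bash", "input": {"command": "echo hello from bash"}}

    call = stub_model("What music am I listening to?", [BASH_TOOL])
    if call["tool"] == "bash":
        print(run_bash(call["input"]["command"]))  # prints "hello from bash"
    ```

    The interesting part in the real system is exactly what the stub elides: the model deciding, unprompted, that a shell command is the right way to answer the question.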

    The Growth Trajectory

    Claude Code was released externally in February 2025 and was not immediately a hit. It took months for people to understand what it was. The terminal interface was alien to many. But internally at Anthropic, daily active users went vertical almost immediately.

    There were multiple inflection points. The first major one was the release of Opus 4, which was Anthropic’s first ASL-3 class model. That’s when Claude Code’s growth went truly exponential. Another inflection came in November 2025 when Cherny personally crossed the 100% AI-written code threshold. The growth has continued to accelerate — it’s not just going up, it’s going up faster and faster.

    The Spotify headline from the week of recording — “Spotify says its best developers haven’t written a line of code since December, thanks to AI” — underscored how mainstream the shift has become.

    Thinking in Exponentials

    Cherny emphasized that thinking in exponentials is deep in Anthropic’s DNA — three of their co-founders were the first three authors on the scaling laws paper. At Code with Claude (Anthropic’s developer conference) in May 2025, Cherny predicted that by year’s end, engineers might not need an IDE to code anymore. The room audibly gasped. But all he did was “trace the line” of the exponential curve of AI-written code.

    The Printing Press Analogy

    Cherny’s preferred historical analog for what’s happening is the printing press. In mid-1400s Europe, literacy was below 1%. A tiny class of scribes did all the reading and writing, employed by lords and kings who often couldn’t read themselves. After Gutenberg, more printed material was created in 50 years than in the previous thousand. Costs dropped 100x. Literacy rose to 70% globally over two centuries.

    Cherny sees coding undergoing the same transition: a skill locked away in a tiny class of “scribes” (software engineers) is becoming accessible to everyone. What that unlocks is as unpredictable as the Renaissance was to someone in the 1400s. He also shared a remarkable historical detail, a surviving account from a 1400s scribe who was actually excited about the printing press because it freed scribes from copying books to focus on the artistic parts of the craft: illustration and bookbinding. Cherny felt a direct parallel to his own experience of being freed from coding tedium to focus on the creative and strategic parts of building.

    What AI Transforms Next

    Cherny believes roles adjacent to engineering — product management, design, data science — will be transformed next. The key technology enabling this is true agentic AI: not chatbots, but AI that can actually use tools and act in the world. Cowork is the first step in bringing this to non-technical users.

    He was candid that this transition will be “very disruptive and painful for a lot of people” and that it’s a conversation society needs to have. Anthropic has hired economists, policy experts, and social impact specialists to help think through these implications.

    The Latent Demand Framework in Depth

    Cherny credited Fiona Fung, the founding manager of Facebook Marketplace, for popularizing the concept of latent demand. The examples are compelling: someone using Claude Code to grow tomato plants, another analyzing their genome, another recovering wedding photos from a corrupted hard drive, a data scientist who figured out how to install Node.js and use a terminal to run SQL analysis through Claude Code.

    But Cherny added a new dimension specific to AI products: latent demand from the model itself. Rather than boxing the model into a predetermined workflow, observe what the model is trying to do and build to support that. At Anthropic they call this being “on distribution.” Give the model tools and goals, then let it figure out the path. The product is the model — everything else is minimal scaffolding.
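
    The “give the model tools and goals, then let it figure out the path” principle can be sketched as a bare agent loop. This is a toy illustration under stated assumptions, not Anthropic's implementation: `agent_loop`, `stub_model`, and the `shout` tool are all invented for the example, and a real system would replace `stub_model` with an LLM call.

    ```python
    from typing import Callable

    def agent_loop(goal: str, tools: dict[str, Callable[[str], str]],
                   model: Callable, max_steps: int = 10) -> str:
        """Hand the model a goal and a toolbox; loop until it declares done.
        Note there is no workflow graph or orchestration layer — the model
        chooses each step itself, which is the 'minimal scaffolding' point."""
        history = [f"GOAL: {goal}"]
        for _ in range(max_steps):
            action = model(history, list(tools))  # model picks the next step
            if action["type"] == "done":
                return action["answer"]
            tool_output = tools[action["tool"]](action["input"])
            history.append(f"{action['tool']} -> {tool_output}")
        return "gave up"

    # Stub model: calls the one available tool once, then finishes.
    def stub_model(history, tool_names):
        if len(history) == 1:
            return {"type": "tool", "tool": "shout", "input": "hello"}
        return {"type": "done", "answer": history[-1]}

    tools = {"shout": lambda s: s.upper()}
    print(agent_loop("demonstrate a tool call", tools, stub_model))
    ```

    The design choice the sketch encodes is the one Cherny describes: the loop never prescribes *which* tool to use or in what order, so swapping in a more capable model improves behavior without rewriting any scaffolding.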

    Safety as a Core Differentiator

    The interview made clear that safety isn’t just a talking point at Anthropic — it’s why everyone is there, including Cherny. He described the work of Chris Olah on mechanistic interpretability: studying model neurons at a granular level to understand how concepts are encoded, how planning works, and how to detect things like deception. A single neuron might correspond to a dozen concepts through a phenomenon called superposition.

    Anthropic’s “race to the top” philosophy means open-sourcing safety tools even when they work for competing products. They released an open-source sandbox for running AI agents securely that works with any agent, not just Claude Code.

    The Memory Leak Story

    One of the most memorable anecdotes: Cherny was debugging a memory leak the traditional way — taking heap snapshots, using debuggers, analyzing traces. A newer engineer on the team simply told Claude Code: “Hey Claude, it seems like there’s a leak. Can you figure it out?” Claude Code took the heap snapshot, wrote itself a custom analysis tool on the fly, found the issue, and submitted a pull request — all faster than Cherny could do it manually. Even veterans of AI-assisted coding get stuck in old habits.

    Personal Background and Post-AGI Plans

    In a touching segment, Cherny and Rachitsky discovered they’re both from Odessa, Ukraine. Cherny’s grandfather was one of the first programmers in the Soviet Union, working with punch cards. Before joining Anthropic, Cherny lived in rural Japan where he learned to make miso — a process that takes months to years and taught him to think on long timescales. His post-AGI plan? Go back to making miso.

    His book recommendations: Functional Programming in Scala (the best technical book he’s ever read), Accelerando by Charles Stross (captures the essence of this moment better than anything), and The Wandering Earth by Liu Cixin (Chinese sci-fi short stories from the Three Body Problem author).


    Thoughts and Analysis

    This interview is one of the most important conversations about the future of software engineering to come out in 2026. Here are some things worth sitting with:

    The “solved” framing is provocative but precise. Cherny isn’t saying software engineering is solved — he’s saying the act of translating intent into working code is solved. The thinking, architecting, deciding-what-to-build, and ensuring-it’s-correct parts are very much unsolved. This distinction matters enormously and most of the pushback in the YouTube comments misses it.

    The underfunding principle is genuinely counterintuitive. Most organizations respond to AI tools by trying to maintain headcount and “augment” existing workflows. Cherny’s approach is the opposite: reduce headcount on a project, give people unlimited AI tokens, and watch them figure out how to ship ten times faster. This is a fundamentally different organizational philosophy and one that most companies will resist until their competitors prove it works.

    The “build for six months from now” advice is dangerous and brilliant. Dangerous because your product will underperform for months and investors will get nervous. Brilliant because when the next model drops, you’ll have the only product that takes full advantage of it. This is how Claude Code went from writing 20% of Cherny’s code to 100% — the product was ready when the model caught up.

    The latent demand framework deserves serious study. The traditional version (watching users hack your product) is well-known from the Facebook era. The AI-native version (watching what the model is trying to do) is genuinely new. “The product is the model” is a deceptively simple statement that most AI product builders are still getting wrong by over-engineering workflows and scaffolding.

    The Cowork trajectory matters more than Claude Code. Claude Code transforms engineers. Cowork transforms everyone else. If Cowork delivers on even half of what Cherny describes — paying tickets, managing project spreadsheets, responding to emails, canceling subscriptions — then the total addressable market dwarfs coding tools. The fact that it was built in 10 days and was an immediate hit suggests Anthropic has found product-market fit for agentic AI beyond engineering.

    The safety discussion felt genuine. Cherny’s explanation of mechanistic interpretability — actually being able to monitor model neurons and detect deception — is one of the clearest public explanations of Anthropic’s safety approach. The fact that the safety mission is what brought him back from Cursor (where he lasted only two weeks) speaks to the culture. Whether you think safety is a genuine concern or a competitive moat, it’s clearly a core part of how Anthropic attracts and retains talent.

    The elephant in the room: this is the head of Claude Code at Anthropic telling you to use more tokens. Multiple YouTube commenters pointed this out, and they’re right to flag it. But the underlying logic holds: if a less capable model requires more correction rounds and more tokens to achieve the same result, then the “cheaper” model isn’t actually cheaper. That’s a testable claim, and most engineers using these tools regularly will tell you it checks out.

    Whether you agree with the “coding is solved” framing or not, the data is hard to argue with. Four percent of all GitHub commits. Two hundred percent productivity gains per engineer. A product that was built in 10 days and scaled to millions of users. These aren’t predictions — they’re measurements. And the curve is still accelerating.


    This article is based on Boris Cherny’s appearance on Lenny’s Podcast, published February 19, 2026. Boris Cherny can be found on X/Twitter and at borischerny.com.

  • Naval Ravikant on AI: Vibe Coding, Extreme Agency, and the End of Average

    TL;DW

    Artificial intelligence is fundamentally shifting how we interact with technology, moving programming from arcane syntax to plain English. This has given rise to “vibe coding,” where anyone with clear logic and taste can build software. While AI will eliminate the demand for average products and hollow out middle-tier software firms, it simultaneously empowers entrepreneurs and creators to build hyper-niche solutions. AI is not a job-stealer for those with “extreme agency”—it is the ultimate ally and a tireless, personalized tutor. The best way to overcome the growing anxiety surrounding AI is simply to dive in, look under the hood, and start building.

    Key Takeaways

    • Vibe coding is the new product management: You no longer manage engineers; you manage an egoless, tireless AI using plain English to build end-to-end applications.
    • Training models is the new programming: The frontier of computer science has shifted from formal logic coding to tuning massive datasets and models.
    • Traditional software engineering is not dead: Engineers who understand computer architecture and “leaky abstractions” are now the most leveraged people on earth.
    • There is no demand for average: The AI economy is a winner-takes-all market. The best app will dominate, while millions of hyper-niche apps will fill the long tail.
    • Entrepreneurs have nothing to fear: Because entrepreneurs exercise self-directed, extreme agency to solve unknown problems, AI acts as a springboard, not a replacement.
    • AI fails the true test of intelligence: Intelligence is getting what you want out of life. Because AI lacks biological desires, survival instincts, and agency, it is not “alive.”
    • AI is the ultimate autodidact tool: It can meet you at your exact level of comprehension, eliminating the friction of learning complex concepts.
    • Action cures anxiety: The antidote to AI fear is curiosity. Understanding how the technology works demystifies it and reveals its practical utility.

    Detailed Summary

    The Rise of Vibe Coding

    The paradigm of programming has experienced a massive leap. With tools like Claude Code, English has become the hottest new programming language. This enables “vibe coding”—a process where non-technical product managers, creatives, and former coders can spin up complete, working applications simply by describing what they want. You can iterate, debug, and refine through conversation. Because AI is adapting to human communication faster than humans are adapting to AI, there is no need to learn esoteric prompt engineering tricks. Simply speaking clearly and logically is enough to direct the machine.

    The Death of Average and the Extreme App Store

    As the barrier to creating software drops to zero, a tsunami of new applications will flood the market. In this environment of infinite supply, there is absolutely zero demand for average. The market will bifurcate entirely. At the very top, massive aggregators and the absolute best-in-class apps will consolidate power and encompass more use cases. At the bottom, a massive long tail of hyper-specific, niche apps will flourish—apps designed for a single user’s highly specific workflow or hobby. The casualty of this shift will be the medium-sized, 10-to-20-person software firms that currently build average enterprise tools, as their work can now be vibe-coded away.

    Why Traditional Software Engineers Still Have the Edge

    Despite the democratization of coding, traditional software engineering remains critical. AI operates on abstractions, and all abstractions eventually leak. When an AI writes suboptimal architecture or creates a complex bug, the engineer who understands the underlying code, hardware, and logic gates can step in to fix it. Furthermore, traditional engineers are required for high-performance computing, novel hardware architectures, and solving problems that fall outside of an AI’s existing training data distribution. Today, a skilled software engineer armed with AI tools is effectively 10x to 100x more productive.

    Entrepreneurs and Extreme Agency

    A common fear is that AI will replace jobs, but no true entrepreneur is worried about AI taking their role. An entrepreneur’s function is the antithesis of a standard job; they operate in unknown domains with “extreme agency” to bring something entirely new into the world. AI lacks its own desires, creativity, and self-directed goals. It cannot be an entrepreneur. Instead, it serves as a tireless ally to those who possess agency, acting as a springboard that allows creators, scientists, and founders to jump to unprecedented heights.

    Is AI Alive? The Philosophy of Intelligence

    The conversation around Artificial General Intelligence (AGI) often strays into whether the machine is “alive.” AI is currently an incredible imitation engine and a masterful data compressor, but it is not alive. It is not embodied in the physical world, it lacks a survival instinct, and it has no biological drive to replicate. Furthermore, if the true test of intelligence is the ability to navigate the world to get what you want out of life, AI fails instantly. It wants nothing. Any goal an AI pursues is simply a proxy for the desires of the human turning the crank.

    The Ultimate Tutor

    One of the most profound immediate use cases for AI is in education. AI is a patient, egoless tutor that can explain complex concepts—from quantum physics to ordinal numbers—at the exact level of the user’s comprehension. By generating diagrams, analogies, and step-by-step breakdowns, AI removes the friction of traditional textbooks. As Naval notes, the means of learning have always been abundant, but AI finally makes those means perfectly tailored to the individual. The only scarce resource left is the desire to learn.

    Action Cures Anxiety

    With the rapid advancement of foundational models, “AI anxiety” has become common. People fear what they do not understand, worrying about a dystopian Skynet scenario or abrupt obsolescence. The solution to this non-specific fear is action. By actively engaging with AI—popping the hood, asking questions, and testing its limitations—users can quickly demystify the technology. Early adopters who lean into their curiosity will discover what the machine can and cannot do, granting them a massive competitive edge in the intelligence age.

    Thoughts

    This discussion highlights a critical pivot in how we value human capital. For decades, technical execution was the bottleneck to innovation. If you had an idea, you had to either learn complex syntax to build it yourself or raise capital to hire a team. AI is completely removing the execution bottleneck. When execution becomes commoditized, the premium shifts entirely to taste, judgment, extreme agency, and logical thinking. We are entering an era where anyone can be a “spellcaster.” The winners in this new economy won’t necessarily be the ones who can write the best functions, but rather the ones who can ask the best questions and hold the most uncompromising vision for what they want to see exist in the world.

  • OpenAI Hires OpenClaw Creator Peter Steinberger: A Major Shift in the AI Agent Race

    OpenAI Hires OpenClaw Creator Peter Steinberger

    In a move that underscores the intensifying race to dominate AI agent technology, OpenAI has brought aboard Peter Steinberger, the visionary Austrian developer behind the viral open-source project OpenClaw. As reported by Reuters, Fortune, and TechCrunch, the deal was announced on February 15, 2026. This isn’t a conventional acquisition but an “acquihire,” where Steinberger joins OpenAI to spearhead the development of next-generation personal AI agents.

    Meanwhile, OpenClaw transitions to an independent foundation, remaining fully open-source with continued support from OpenAI (confirmed via Steinberger’s Blog and LinkedIn). This strategic alignment comes amid soaring interest in AI agents, a market projected by AInvest to hit $52.6 billion by 2030 with a 46.3% compound annual growth rate.

    The announcement, made via a post on X by OpenAI CEO Sam Altman around 21:39 GMT, arrived just hours before widespread media coverage from outlets like Fortune. Steinberger swiftly confirmed the news in a personal blog post, emphasizing his excitement for the future while reaffirming OpenClaw’s independence.

    The Rise of OpenClaw: From Playground Project to Phenomenon

    OpenClaw, originally launched as Clawdbot in November 2025—a playful nod to Anthropic’s Claude model—quickly evolved into a powerhouse open-source AI agent framework designed for personal use (Fortune, Steinberger’s Blog, APIYI). Steinberger, who “vibe coded” the project solo after a three-year hiatus following the sale of his previous company for over $100 million, saw it explode in popularity. It amassed over 100,000 GitHub stars, drew 2 million visitors in a week, and became the fastest-growing repo in GitHub history—surpassing milestones of projects like React and Linux (Yahoo Finance, LinkedIn).

    A trademark dispute with Anthropic prompted renames: first to Moltbot (evoking metamorphosis), then to OpenClaw in early 2026. The framework empowers AI to autonomously handle tasks on users’ devices, fostering a community focused on data ownership and multi-model support.

    Key capabilities that fueled its hype include:

    • Managing emails and inboxes.
    • Booking flights and restaurant reservations, and handling flight check-ins.
    • Interacting with services like insurers.
    • Integrating with apps such as WhatsApp and Slack for task delegation.
    • Creating a “social network” for AI agents via features like Moltbook, which spawned 1.6 million agents.

    Despite its success, sustainability proved challenging. Steinberger personally shouldered infrastructure costs of $10,000 to $20,000 monthly, routing sponsorships to dependencies rather than himself, even as donations and corporate support (including from OpenAI) trickled in.

    The Path to the Deal: Billion-Dollar Bids and Open-Source Principles

    Prior to the announcement, Steinberger fielded billion-dollar acquisition offers from tech giants Meta and OpenAI (Yahoo Finance). Meta’s Mark Zuckerberg personally messaged Steinberger on WhatsApp, sparking a 10-minute debate over AI models, while OpenAI’s Sam Altman offered computational resources via a Cerebras partnership to boost agent performance. Meta aggressively pursued Steinberger and his team, but OpenAI advanced in talks to hire him and key contributors.

    Steinberger spent the preceding week in San Francisco meeting AI labs, accessing unreleased research. He insisted any deal preserve OpenClaw’s open-source nature, likening it to Chrome and Chromium. Ultimately, OpenAI’s vision aligned best with his goal of accessible agents.

    Key Announcements and Voices from the Frontlines

    Sam Altman, in his X post on February 15, 2026, hailed Steinberger as a “genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people.” He added, “We expect this will quickly become core to our product offerings. OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. The future is going to be extremely multi-agent and it’s important to us to support open source as part of that.”

    Steinberger’s blog post echoed this enthusiasm: “tl;dr: I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent. The last month was a whirlwind… When I started exploring AI, my goal was to have fun and inspire people… My next mission is to build an agent that even my mum can use… I’m a builder at heart… What I want is to change the world, not build a large company… The claw is the law.”

    Strategic Implications: Opportunities and Challenges Ahead

    For OpenAI, this bolsters their AI agent push, potentially accelerating consumer-grade solutions and addressing barriers like setup complexity and security. It positions them in the “personal agent race” against Meta, emphasizing multi-agent systems. The broader AI agents market could reach $180 billion by 2033; financial terms of the deal were undisclosed but are likely substantial.

    OpenClaw benefits from foundation status (akin to the Linux Foundation), ensuring independence and community focus with OpenAI’s sponsorship.

    However, risks loom large. OpenClaw’s “unfettered access” to devices raises security concerns, including data breaches and rogue actions—like one incident of spamming hundreds of iMessages. China’s industry ministry warned of cyberattack vulnerabilities if misconfigured. Steinberger aims to prioritize safety and accessibility.

    Community Pulse: Excitement, Skepticism, and Satire

    Reactions on X blend hype and caution. Cointelegraph called it a “big move” for the agent ecosystem. One user hailed the “birth of the agent era,” while another satirically predicted a shift to “ClosedClaw.” Fears of closure persist, but congratulations abound, with some viewing Anthropic’s trademark push as a “fumble.”

    LinkedIn’s Reyhan Merekar praised Steinberger’s solo feat: “Literally coding alone at odd hours… Faster than React, Linux, and Kubernetes combined.”

    Beyond the Headlines: Vision and Value

    Steinberger’s core vision: Agents for all, even non-tech users, with emphasis on safety, cutting-edge models, and impact over empire-building. OpenClaw’s strengths—model-agnostic design, delegation-focused UX, and persistent memory—eluded even well-funded labs.

    As of February 15, 2026, this marks a pivotal moment in AI’s evolution, blending open innovation with corporate muscle. No further updates have emerged, but the multi-agent future Altman envisions is accelerating.

  • The Buzz of the Dolomites: How FPV Drones Became the Breakout Star of the 2026 Winter Olympics

    The Buzz of the Dolomites: How FPV Drones Became the Breakout Star of the 2026 Winter Olympics

    Inside the 243-gram flying cameras chasing Olympic gold at 90 mph — the pilots, the tech, the controversy, and why Milano Cortina 2026 will be remembered as the “Drone Games.”


    TLDR

    The Milano Cortina 2026 Winter Olympics have deployed FPV (First-Person View) drones as a core broadcast tool for the first time in Winter Games history. A fleet of 25 custom-built drones — weighing just 243 grams each and capable of 90 mph — are chasing bobsleds through ice canyons, diving off ski jumps alongside athletes, and orbiting snowboarders mid-trick. Built by the Dutch firm “Dutch Drone Gods” and operated by former athletes turned drone pilots, the system uses a dual-feed transmission architecture that sends ultra-low-latency video to the pilot’s goggles while simultaneously beaming broadcast-quality HD to the production truck. The result is footage that makes viewers feel like they’re sitting on the athlete’s shoulder. But the revolution comes with a buzzkill — literally. The drones’ high-pitched whine has sparked a global “angry mosquito” debate, and Italian defense contractor Leonardo has erected an invisible “Electronic Dome” of radar and jamming systems over the Dolomites to keep unauthorized drones out. Love it or hate it, FPV has graduated from experiment to Olympic standard, and the 2030 French Alps Games will inherit everything Milano Cortina pioneered.


    Key Takeaways

    • First-ever structural FPV integration at a Winter Olympics. These aren’t novelty shots — FPV is the default angle for replays and key live segments in speed disciplines at Milano Cortina 2026.
    • 25 custom drones, 15 dedicated FPV teams. The fleet is built by Dutch Drone Gods for Olympic Broadcasting Services (OBS), each unit weighing just 243 grams with top speeds of 140 kph.
    • Dual-feed transmission solves the latency problem. Pilots see 15-40ms ultra-low-latency video through their goggles while a separate HD broadcast feed with 300-400ms delay goes to the production truck via COFDM signal.
    • Pilots are former athletes. Ex-Norwegian ski jumper Jonas Sandell flies the ski jumping coverage. He anticipates the “lift” because he’s done it himself thousands of times.
    • Three-person teams modeled on military aviation. Every flight requires a pilot (goggles on, zero outside awareness), a spotter (line-of-sight, abort authority), and a director (in the OB truck, calling the live cut).
    • The inverted propeller design is the secret weapon. Mounting motors upside-down lowers the center of gravity and lets the drone “carve” air like a skier carves snow — smoother banking, cleaner footage.
    • Battery life is 5 minutes in sub-zero conditions. Heated cabins keep LiPo packs at body temperature until seconds before flight. Cold batteries can voltage-sag and drop a drone mid-chase.
    • Leonardo’s “Electronic Dome” protects the airspace. Tactical radar, RF sniffing, and electronic jamming distinguish sanctioned drones from rogue threats. Unauthorized flight is a criminal offense.
    • The “angry mosquito” controversy is real. Props spinning at 30,000+ RPM emit a 400Hz-8kHz whine that cuts through the natural soundtrack of winter sports. AI audio scrubbing is in development for 2030.
    • 93% viewership spike. 26.5 million viewers in the first five days — and FPV footage is being credited as a major factor.

    The Full Story

    As the 2026 Winter Olympic Games in Milano Cortina reach their halfway point, a singular technological narrative has emerged that eclipses even the introduction of ski mountaineering or the unprecedented decentralized venue structure spanning 400 kilometers of northern Italy. It’s not a new sport. It’s a new way of seeing sport.

    For the first time in Winter Olympics history, First-Person View drones have been deployed not as experimental novelties bolted onto the margins of production, but as the primary architectural component of the live broadcast for speed disciplines. From the icy chutes of the Cortina Sliding Centre to the vertical drops of the Stelvio Ski Centre in Bormio, a fleet of custom-engineered, high-speed micro-drones is fundamentally altering the viewer’s relationship with gravity, velocity, and fear.

    No longer tethered to fixed cable cams or zoomed-in telephoto lenses that compress depth and flatten the terror of a 90 mph descent, audiences are now riding shotgun. They’re sitting on the shoulder of a downhill skier as she threads a 2-meter gap between dolomite rock walls. They’re matching a four-man bobsled through a concrete-and-ice canyon where the walls blur into a warp-speed tunnel. They’re floating parallel to a ski jumper at the apex of a 140-meter flight, looking down at the terrifying void between athlete and earth.

    This is FPV at the Olympics. And it changes everything.

    The Hardware: 243 Grams of Purpose-Built Fury

    The drones chasing Olympic gold are nothing like the DJI Mavic sitting in your closet. They are bespoke, purpose-built broadcast machines designed to survive a hostile alpine environment while delivering cinema-grade imagery at insane speeds. The fleet comprises approximately 25 active units with 15 dedicated FPV teams, and the hardware was developed by the Netherlands-based firm Dutch Drone Gods (DDG) in partnership with Olympic Broadcasting Services.

    The engineering brief was a paradox: build something fast enough to chase a bobsled at 140 kph, yet light enough that if it ever made contact with an athlete, the damage would be survivable. The answer weighs 243 grams — just under the critical 250-gram threshold that triggers stricter aviation classification in most jurisdictions.
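    That weight figure can be sanity-checked with a back-of-the-envelope kinetic energy calculation. A minimal sketch using the 243 g mass and 140 kph burst speed quoted above (the 900 g comparison is an assumed typical consumer-drone weight, not a figure from OBS):

    ```python
    # Kinetic energy at impact: KE = 1/2 * m * v^2
    def kinetic_energy_joules(mass_kg: float, speed_kph: float) -> float:
        v = speed_kph / 3.6  # convert km/h to m/s
        return 0.5 * mass_kg * v ** 2

    # Olympic FPV drone (243 g) vs. an assumed 900 g consumer drone, both at 140 kph
    fpv = kinetic_energy_joules(0.243, 140)
    consumer = kinetic_energy_joules(0.900, 140)
    print(f"243 g drone: {fpv:.0f} J; 900 g drone: {consumer:.0f} J")
    ```

    At a given speed, impact energy scales linearly with mass, which is the core of the argument for staying under 250 grams.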

    Core Specs at a Glance

    Feature | Specification | Why It Matters
    Weight | 243 grams | Sub-250g classification bypasses stricter aviation rules; minimizes impact energy
    Max Speed | 100+ kph (bursts to 140 kph / 90 mph) | Matches bobsled and downhill skiing velocities
    Flight Time | ~5 minutes (two athlete runs) | Cold degrades batteries fast; hot-swap protocol keeps packs warm until launch
    Frame Design | Inverted propeller “Cinewhoop” (2.5″ to 7″) | Lowered center of gravity; cleaner air over props for smoother banking
    Operating Temp | -20°C to +5°C | LiPo batteries pre-heated in thermal warmers to prevent voltage sag
    Pilot Feed | DJI O4 Air Unit, 15-40ms latency | Reflex-speed video to goggles — the pilot’s “nervous system”
    Broadcast Feed | Proton CAM Full HD Mini + Domo Pico Tx, 300-400ms latency | HD HDR signal via COFDM to production truck — the “visual cortex”

    The Inverted Propeller Innovation

    The single most important hardware decision DDG made was mounting the motors upside down. In a traditional drone, propellers sit above the arms and push air downward over the frame, creating turbulence. The Olympic drones flip this — motors are mounted below the arms in a “pusher” configuration.

    The physics payoff is significant. When chasing a skier through a Super-G turn, the drone must bank aggressively — sometimes 60-70 degrees. The inverted design lowers the center of gravity, allowing the drone to “carve” through the air the way a ski carves through snow. The result is footage with smooth, sweeping curves that mirror the athlete’s line rather than fighting it. And because the propellers push air away from the frame rather than washing it over the body, there’s less self-induced turbulence — critical when you’re flying centimeters from ice inside a bobsled track.
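    The carving claim can be quantified with textbook coordinated-turn mechanics, r = v² / (g·tan θ), using the bank angles and burst speed quoted above. A rough sketch (standard flight dynamics, not DDG's published tuning):

    ```python
    import math

    # Coordinated-turn radius: r = v^2 / (g * tan(bank_angle))
    def turn_radius_m(speed_kph: float, bank_deg: float, g: float = 9.81) -> float:
        v = speed_kph / 3.6
        return v ** 2 / (g * math.tan(math.radians(bank_deg)))

    # Load factor (apparent g-force) in the same turn: n = 1 / cos(bank_angle)
    def load_factor(bank_deg: float) -> float:
        return 1 / math.cos(math.radians(bank_deg))

    for bank in (60, 70):
        print(f"{bank} deg bank at 140 kph: radius ~{turn_radius_m(140, bank):.0f} m, "
              f"~{load_factor(bank):.1f} g on the airframe")
    ```

    At a 70-degree bank and full burst speed, the drone is carving a roughly 56-meter arc while pulling nearly 3 g, which is why a low center of gravity matters so much.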

    The Dual-Feed Architecture: Two Brains, One Drone

    Here’s the fundamental problem with live FPV broadcast: a pilot flying at 90 mph needs to see what the drone sees instantly. Even a half-second delay and you’ve already crashed. But broadcast television needs high-definition, color-corrected, HDR imagery — processing that inherently introduces latency.

    The solution is elegant: each drone carries two independent transmission systems.

    The pilot feed runs through a DJI O4 Air Unit at 15-40 milliseconds of latency. It’s lower resolution, optimized purely for frame rate and response time. This is the drone’s “nervous system” — raw, twitchy, and fast. Only the pilot sees it.

    The broadcast feed uses a completely separate camera (Proton CAM Full HD Mini) and transmitter (Domo Pico Tx), running at 300-400ms latency via COFDM signal — a modulation scheme specifically chosen because it’s robust against the multipath interference caused by radio signals bouncing off dolomite rock walls and concrete sliding tracks. This feed goes straight to the Outside Broadcast van where it’s color-graded and cut into the world feed alongside 800 other cameras.

    The result: the pilot flies on instinct while the world watches in HD. Two realities, one airframe.
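    To see why the two latency budgets matter, consider how far the drone travels while its own video is in flight. A quick sketch using the figures above:

    ```python
    # Distance a drone covers during video-link latency at bobsled chase speeds.
    def blind_distance_m(speed_kph: float, latency_ms: float) -> float:
        return (speed_kph / 3.6) * (latency_ms / 1000)

    speed = 140  # kph, burst speed from the spec table
    print(f"Pilot feed  (30 ms):  {blind_distance_m(speed, 30):.1f} m of track")
    print(f"Broadcast (400 ms):  {blind_distance_m(speed, 400):.1f} m of track")
    ```

    At 400 ms the drone covers more than 15 meters of track before the picture arrives: acceptable for a color-graded world feed, fatal for a pilot threading a bobsled chute.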

    The Human Element: Athletes Flying Athletes

    The most fascinating aspect of the 2026 FPV program isn’t the hardware — it’s the hiring strategy. OBS and its broadcast partners realized early on that following a ski jumper off a 140-meter hill requires more than stick skills. It requires understanding what the athlete’s body is about to do before it does it.

    So they recruited athletes.

    Jonas Sandell is a former member of the Norwegian national ski jumping team. He now flies FPV for OBS at the Predazzo Ski Jumping Stadium. His athletic background gives him something no amount of simulator time can replicate: a proprioceptive understanding of when a jumper will “pop” off the table and transition from running to flying. He anticipates the lift phase — throttling up the drone milliseconds before the visual cue — because his own body remembers the feeling. He knows the flight envelope of a ski jumper because he used to be the flight envelope.

    For the sliding sports — luge, skeleton, bobsled — the pilot known as “ShaggyFPV” from Dutch Drone Gods leads what might be the most dangerous camera crew at the Games. Flying inside the bobsled track is essentially flying inside a concrete pipe with no GPS, no stabilization assists, and a 1,500-kilogram sled bearing down at 140 kph. ShaggyFPV and his team fly up to 50 runs per session, building muscle memory of every curve and transition so deeply that the flying becomes subconscious. If a sled crashes and rides up the walls, the pilot must have a faster-than-conscious “bail out” reflex — throttle up and out of the track instantly to avoid becoming a 243-gram projectile aimed at a downed athlete.

    The Three-Person Team Protocol

    No FPV drone flies alone at the Olympics. Every unit operates under a strict three-person crew structure modeled on military aviation:

    1. The Pilot — goggles on, immersed in the FPV feed, zero awareness of the physical world. They fly on reflex and audio cues.
    2. The Spotter/Technician — maintains visual line-of-sight with the drone at all times. Monitors signal strength, battery voltage, wind, and physical hazards. Has unilateral “tap on the shoulder” authority to abort any flight, no questions asked.
    3. The Director — sits in the warmth of the OB truck, watching the drone feed alongside 20+ other camera angles. Calls the shot: “Drone 1, stand by… and TAKE.” Coordinates the cut so the drone enters the broadcast mix at exactly the right moment.

    This three-person ballet is performed hundreds of times a day across all venues. It’s the invisible choreography that makes the “wow” moments look effortless.
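    The authority chain described above can be modeled as a tiny state machine. The class below is an illustrative toy, not OBS's actual control software; its key property is that the spotter's abort is unconditional while the director only gates the broadcast cut:

    ```python
    class FlightCrew:
        """Toy model of the three-person authority chain."""

        def __init__(self):
            self.airborne = False
            self.on_air = False

        def launch(self):
            self.airborne = True

        def director_take(self):
            # The director can only cut the feed in while the drone is flying.
            if self.airborne:
                self.on_air = True

        def spotter_abort(self):
            # Unilateral and unconditional: ends the flight and drops the feed.
            self.airborne = False
            self.on_air = False

    crew = FlightCrew()
    crew.launch()
    crew.director_take()
    crew.spotter_abort()
    print(crew.airborne, crew.on_air)  # False False
    ```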

    The Visual Philosophy: “Movement in Sport”

    Mark Wallace, OBS Chief Content Officer, defined the visual strategy for 2026 with a two-word mandate: “Movement in Sport.” The goal isn’t just to show what happened. It’s to make the viewer feel what happened.

    In alpine skiing, the drone doesn’t just follow — it mimics. When the skier tucks, the drone drops altitude. When the skier carves, the drone banks. The camera becomes a kinesthetic mirror, conveying the violence of the vibration and the crushing G-forces in a way that a static telephoto shot from the sideline never could.

    In ski jumping, the drone tracks parallel to the athlete mid-flight, revealing the true scale of a 140-meter jump — the terrifying height, the impossible hang time, the narrow margin between textbook landing and catastrophe. Tower cameras flatten this. FPV restores it.

    In the sliding sports, the FPV drone may be the only camera capable of honestly conveying speed. Fixed trackside cameras pan so fast the sled blurs into abstraction. But the drone matches velocity, keeping the sled in razor-sharp focus while the ice walls dissolve into a warp-speed tunnel around it. For the first time, viewers at home can viscerally understand why bobsled pilots describe their sport as “controlled falling.”

    And in snowboard and freestyle at Livigno, the pilots have creative license to orbit athletes mid-trick, creating real-time “Bullet Time” effects that would have required a Hollywood rig and months of post-production just a decade ago.

    Venue by Venue: Where FPV Shines (and Struggles)

    Milano Cortina 2026 is the most geographically dispersed Olympics in history, with venues stretching across hundreds of kilometers of northern Italy. Each location presents unique challenges that force the FPV teams to adapt their hardware, techniques, and risk calculus.

    Bormio — The Vertical Wall

    The Stelvio Ski Centre hosts men’s alpine skiing on one of the steepest, iciest, most terrifying courses in the world. The north-facing slope sits in perpetual shadow. Pilots switch to heavier 7-inch drone configurations here to fight the brutal updrafts on the exposed upper mountain. The “San Pietro” jump — one of the Stelvio’s signature features — requires the drone to dive with the skier off a cliff at 140 kph, judging the athlete’s speed with centimeter-level precision. Too slow and the skier vanishes. Too fast and the shot is ruined.

    Cortina d’Ampezzo — The Amphitheater

    At the Olympia delle Tofane, women’s alpine skiing threads through massive dolomite rock formations. The challenge here is dual: RF multipath (radio signals bouncing off rock walls threaten to break up the video feed) and extreme light contrast (bright sun to deep rock shadow in seconds). The COFDM transmission system earns its keep here, and technicians in the truck ride the iris and ISO controls like a musician riding a fader.

    The Cortina Sliding Centre is the most technically demanding FPV environment at the Games. A concrete and ice canyon with no GPS signal. Pilots fly purely on muscle memory in Acro mode — no stabilization, no computer assistance, just stick and reflex. Every flight carries an abort plan because if a sled crashes, the drone needs to exit the track faster than human thought.

    Livigno — The Playground

    The open terrain of the Livigno Snow Park is where FPV gets to play. In Big Air, drones orbit rotating athletes. In Slopestyle, they chase riders across sequences of rails and jumps. When a rider checks speed to set up a trick, the drone “yaws” — turning sideways to increase drag and bleed speed instantly. It’s the most creatively expressive FPV work at the Games.

    Milan — The Indoor Frontier

    The most experimental deployment is indoors at the Mediolanum Forum for speed skating. Metal stadium beams create RF havoc, reflecting signals and causing video breakup. The solution: specialized RF repeaters and miniaturized 2.5-inch shrouded Cinewhoops safe to fly near crowds. The drones track skaters from inside the oval, revealing the tactical chess of team pursuit events in a way overhead cameras never could. Pilots fly in full manual mode with the compass disabled — the steel structure would send a magnetometer haywire.

    The Physics Problem: Flying Fast in Thin, Frozen Air

    Flying a 243-gram drone at 2,300 meters above sea level in -20°C is not the same as flying it in a parking lot in the Netherlands. The physics conspire against you at every level.

    Thin air. At the Bormio start elevation of 2,255 meters, air density is significantly lower than at sea level. Propellers generate lift by moving air, and when there’s less air to move, the props must spin faster. This draws more current, drains batteries faster, and makes the drone feel “looser” — less grip on the air, harder to hold tight lines. The DDG drones use high-pitch propellers and high-KV motors that bite aggressively into the thin atmosphere to compensate.

    Cold batteries. Lithium-polymer battery chemistry slows down as temperature drops. Internal resistance rises. When the pilot punches the throttle to chase a skier out of the start gate, the battery voltage can plummet — a phenomenon called “voltage sag” — potentially triggering a low-voltage cutoff that kills the drone mid-flight. The “Heated Cabin” protocol is not a comfort measure; it’s mission-critical. Batteries are stored at body temperature (~37°C) in thermal warmers until the final seconds before flight, and packs are swapped every two runs even if they’re not fully depleted.

    Blinding contrast. The visual environment of winter sports is an exposure nightmare: blinding white snow and ink-black shadows from rock formations. The Proton CAM was selected specifically for its HDR capability, resolving detail in both extremes simultaneously. But it’s not set-and-forget — technicians in the truck ride the exposure adjustments in real-time as the drone descends from sun to shadow and back.
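    The first two effects are easy to put numbers on with standard models: the International Standard Atmosphere for air density, and a simple internal-resistance model for voltage sag. A sketch (the pack voltage, current draw, and resistance values are illustrative assumptions, not DDG specs):

    ```python
    def isa_density_ratio(altitude_m: float) -> float:
        """Air density relative to sea level, ISA troposphere model."""
        return (1 - 2.25577e-5 * altitude_m) ** 4.25588

    def hover_rpm_factor(density_ratio: float) -> float:
        """Thrust scales with rho * rpm^2, so hover rpm scales with 1/sqrt(rho)."""
        return density_ratio ** -0.5

    rho = isa_density_ratio(2255)  # Bormio start elevation
    print(f"Air density: {rho:.0%} of sea level")
    print(f"Props must spin ~{hover_rpm_factor(rho) - 1:.0%} faster to hover")

    # Voltage sag: V_load = V_oc - I * R_internal. Cold roughly doubles internal
    # resistance (illustrative values; real behavior depends on chemistry and C-rating).
    v_oc, current = 25.2, 60.0  # assumed 6S pack, hard-throttle current draw
    for label, r_int in (("warm", 0.010), ("cold", 0.020)):
        print(f"{label} pack: {v_oc - current * r_int:.1f} V under load")
    ```

    At Bormio's altitude the props are working with roughly 80% of sea-level air, and a cold pack loses an extra volt or more under hard throttle, which is exactly when the pilot punches out of the start gate.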

    The Electronic Dome: Security in the Sky

    While OBS drones are the stars of the broadcast, they fly in one of the most securitized airspaces on the planet. The Alps present a defender’s nightmare: valleys create radar shadows where a rogue drone can launch from a hidden valley floor, pop over a ridge, and be over a stadium in seconds.

    Italian defense giant Leonardo, appointed as Premium Partner for security and mission-critical communications, has erected a multi-layered Counter-UAS defense grid — an invisible “Electronic Dome” — over every venue.

    The system works in three phases:

    1. Detection. Tactical Multi-mission Radar (TMMR) — an AESA array optimized for “low, slow, and small” targets — scans the mountain clutter for anything that shouldn’t be there. Simultaneously, passive RF sensors listen for the telltale handshake signals between a remote controller and a drone.
    2. Classification. Once detected, the system must instantly determine friend or foe. OBS drones broadcast specific Remote ID signatures and operate on reserved, whitelisted frequencies coordinated with ENAC (the Italian Civil Aviation Authority). Anything detected outside the predefined 3D geofences is flagged as hostile.
    3. Mitigation. At an Olympic venue, you can’t shoot a drone down — falling debris over a crowd of thousands is not an option. Instead, Leonardo’s Falcon Shield technology performs a “soft kill,” flooding the rogue drone’s control frequencies (2.4GHz / 5.8GHz) with electronic noise. With its link severed, most consumer drones hover momentarily and then execute a Return-to-Home. Tactical teams on the ground carry handheld jamming rifles for close-range backup.
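    Stripped of the radar hardware, the classification step in phase 2 reduces to a whitelist-plus-geofence check. A minimal sketch (the Remote IDs, frequencies, and circular geofence are invented placeholders, not Leonardo's actual logic):

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class Track:
        remote_id: str | None  # broadcast Remote ID, if any
        x_m: float             # position relative to venue center
        y_m: float
        freq_mhz: float

    WHITELISTED_IDS = {"OBS-DDG-001", "OBS-DDG-002"}   # placeholder credentials
    RESERVED_FREQS = {5855.0, 5875.0}                  # placeholder reserved channels
    GEOFENCE_RADIUS_M = 500.0                          # simplified circular Red Zone

    def classify(t: Track) -> str:
        inside = math.hypot(t.x_m, t.y_m) <= GEOFENCE_RADIUS_M
        credentialed = t.remote_id in WHITELISTED_IDS and t.freq_mhz in RESERVED_FREQS
        if inside and credentialed:
            return "friendly"
        if inside:
            return "hostile"   # inside the Red Zone without credentials
        return "monitor"       # outside the geofence: track, don't engage

    print(classify(Track("OBS-DDG-001", 120, 40, 5855.0)))  # friendly
    print(classify(Track(None, 300, -100, 2440.0)))         # hostile
    ```

    The real system fuses radar tracks, RF sniffing, and 3D geofences rather than a flat radius check, but the friend-or-foe decision has the same shape.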

    ENAC has designated all Olympic venues as temporary “Red Zones” from February 6-22. Unauthorized drone flight in these zones isn’t a civil fine — it’s a criminal offense under the Games’ National Security designation. The US Diplomatic Security Service has gone so far as to warn American travelers that Italy will enforce strict bans and anticipates at least one “high profile drone incursion.”

    The Angry Mosquito: FPV’s Buzzing Controversy

    For all the visual brilliance, the FPV revolution has a PR problem — and it sounds like an angry insect trapped in your living room.

    Small propellers spinning at 30,000+ RPM generate a high-frequency whine in the 400Hz-8kHz range. This is precisely the frequency band where human hearing is most sensitive (we evolved to find high-pitched buzzing irritating — thanks, mosquitoes). The drone’s whine cuts through the natural soundtrack of winter sports: the roar of edges on ice, the whoosh of wind, the crunch of snow, the silence of flight. In some broadcast feeds, the drone noise overpowers everything else.

    Traditionalists argue the footage, while undeniably dynamic, can be disorienting — a “video game aesthetic” that detracts from the gravity of the Olympic moment. Others counter that the immersion is worth the acoustic cost.

    OBS CEO Yiannis Exarchos has publicly acknowledged the problem. Engineers are testing AI audio filters that can “fingerprint” the specific waveform of the DDG drone motors and subtract them from the live mix in real-time — essentially noise-canceling headphones for the broadcast. The technology isn’t fully deployed for every event in 2026, but OBS views it as a mandatory requirement for the 2030 French Alps Games.
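    Conceptually, the filtering problem is spectral: identify the whine's frequency content and attenuate it without touching the rest of the mix. A toy numpy sketch that blankets the whole 400 Hz-8 kHz band (real fingerprinting would target specific motor harmonics instead, since this band also contains sound you want to keep):

    ```python
    import numpy as np

    def suppress_band(audio: np.ndarray, sample_rate: int,
                      lo_hz: float = 400.0, hi_hz: float = 8000.0,
                      atten: float = 0.05) -> np.ndarray:
        """Attenuate the drone-whine band in the frequency domain."""
        spectrum = np.fft.rfft(audio)
        freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
        band = (freqs >= lo_hz) & (freqs <= hi_hz)
        spectrum[band] *= atten
        return np.fft.irfft(spectrum, n=len(audio))

    # Synthetic mix: 100 Hz "edge roar" plus a 2 kHz motor whine
    sr = 48_000
    t = np.arange(sr) / sr
    audio = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 2000 * t)
    clean = suppress_band(audio, sr)
    ```

    The blanket approach also shows why naive filtering fails: skis on ice live in the same band as the whine, which is why OBS is pursuing waveform fingerprinting rather than simple band suppression.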

    The Road Here: A Brief History of Olympic Drones

    Milano Cortina didn’t happen overnight. The path from aerial curiosity to broadcast infrastructure took 12 years and four Olympic cycles.

    • Sochi 2014: Drones debuted as flying tripods — slow, heavy multi-rotors capturing landscape “establishing shots” of the Caucasus Mountains. They couldn’t follow athletes and had unpredictable battery life in the Russian cold.
    • PyeongChang 2018: The 1,200-drone Intel Shooting Star light show at the Opening Ceremony was spectacular, but it was performance art, not sports coverage. Broadcast drones remained stuck on scenic B-roll.
    • Beijing 2022: COVID restrictions accelerated remote camera technology. Drones were used more aggressively in cross-country skiing and biathlon, but still as “high-eye” perspectives looking down. The latency barrier for close-proximity FPV hadn’t been cracked for broadcast-grade reliability.
    • Paris 2024: The breakthrough. OBS tested FPV in mountain biking and urban sports, proving the hybrid dual-feed transmission model worked in live production. The critical lesson: FPV pilots need to understand the sport, not just the stick. This directly shaped the athlete-recruitment strategy for 2026.
    • Milano Cortina 2026: FPV graduates from experiment to standard. It is no longer a “special feature” — it is the primary camera system for speed disciplines, treated with the same priority as a wired trackside camera on the main production switcher.

    By the Numbers

    25 | Active drone units across all venues
    15 | Dedicated FPV teams
    243 g | Weight of each drone (sub-250g class)
    140 kph | Maximum burst speed (90 mph)
    5 min | Flight time per battery in freezing conditions
    15-40 ms | Pilot feed latency (reflex-speed)
    300-400 ms | Broadcast feed latency (HD quality)
    -20°C | Minimum operating temperature
    2,300 m | Highest venue elevation (Tofane start)
    50 runs | Flights per session for sliding sport pilots
    800 | Total cameras deployed across all Games coverage
    26.5 M | Viewers in first five days (93% increase over Beijing 2022)
    12 months | Preparation and training time per venue

    The Regulatory Stack: Why Your Drone Can’t Fly But Theirs Can

    One of the more interesting subtexts of the “Drone Games” is the dual reality playing out in Italian airspace: OBS drones are chasing bobsleds while everyone else is grounded.

    The regulatory framework operates in three layers:

    1. EU Drone Law (Commission Implementing Regulation 2019/947 and Delegated Regulation 2019/945) — defines Open, Specific, and Certified categories for all UAS operations across Europe.
    2. Italian National Implementation — ENAC and ENAV/D-Flight operationalize the rules. D-Flight provides the maps showing where you can and can’t fly, and ENAC can prohibit Open category operations inside designated UAS geographical zones.
    3. Olympic Security Overlay — temporary Red Zones and No-Fly Zones around all venue clusters from February 6-22, backed by criminal penalties under the National Security designation. These override everything else.

    OBS drones thread this needle through meticulous pre-coordination with ENAC, Italian police, venue prefectures, and the Leonardo security apparatus. Every flight path is pre-approved. Every drone broadcasts approved credentials. The “Electronic Dome” is calibrated to recognize them as friendly. A random tourist launching a Mavic? That’s a criminal act and an immediate trigger for the Counter-UAS response.

    Drone Racing: The Sport Waiting in the Wings

    There’s a fascinating meta-narrative playing out alongside the broadcast revolution: the sport of Drone Racing itself is inching toward Olympic recognition.

    Just months before the Winter Games, Drone Racing appeared as a medal event at the 2025 World Games in Chengdu. The talent overlap is striking — pilots like ShaggyFPV are already at the Olympics, just pointing their drones at athletes instead of racing gates. The FAI (World Air Sports Federation) continues to push for Olympic inclusion, and the merging of FPV broadcast culture with competitive drone culture suggests it’s a matter of when, not if.

    By the 2030s, the pilots filming the Olympics might also be competing in them.

    What Comes Next: The 2030 Legacy

    Everything pioneered at Milano Cortina — the inverted propeller design, the dual-feed transmission, the heated battery cabins, the athlete-pilot recruitment model, the three-person crew protocol — becomes the baseline standard for the 2030 Winter Games in the French Alps.

    But the technology won’t stand still. Expect further miniaturization, AI-assisted “follow-me” autonomy to reduce pilot workload, and — most critically — the perfection of real-time AI audio scrubbing to finally silence the angry mosquito without silencing the drone.

    OBS is also exploring athlete-worn microphones paired with FPV footage, which could let viewers hear the ragged breathing of a downhill skier while riding their shoulder at 90 mph. If that doesn’t make you grip your couch, nothing will.


    Thoughts

    The Milano Cortina 2026 FPV story is, at its core, a story about the collapse of distance between viewer and athlete. For decades, winter sports broadcasting has been fighting the same battle: how do you convey what it feels like to hurtle down a mountain at 90 mph to someone sitting on a couch? Telephoto lenses compress depth and kill the sense of speed. Cable cams are rigid and predictable. Helmet cams are shaky and disorienting.

    FPV cracks the problem by making the camera itself an athlete — one that flies alongside, banks with, dives with, and bleeds speed with the human it’s chasing. The footage isn’t just immersive; it’s educational. Watching an FPV shot of a downhill run, you suddenly understand why athletes describe certain sections as terrifying. You see the compression. You feel the violence of the turn. The sport makes sense in a way it never did from a static camera 200 meters away.

    The mosquito noise controversy is real but solvable — and frankly, it’s the kind of problem you want to have. It means the technology is close enough to the action to matter. AI audio scrubbing will handle it by 2030, and in the meantime, the visual revolution is worth a little buzzing.

    What’s most impressive is the human layer. The decision to hire former athletes as pilots is quietly brilliant. Jonas Sandell doesn’t just fly a drone alongside ski jumpers — he is a ski jumper who happens to be holding a transmitter instead of standing on skis. That intuitive understanding of sport physics is what separates “cool drone shot” from “footage that changes how you understand the sport.” It’s the difference between following and anticipating.

    The security dimension is equally fascinating. Leonardo’s “Electronic Dome” is essentially a small-scale military air defense system repurposed for consumer drone threats — a sign of how seriously modern event security takes the airspace layer. The fact that OBS drones need IFF-style credentialing (friend-or-foe identification, borrowed from fighter jet terminology) to avoid being jammed by their own side tells you everything about the complexity of operating sanctioned drones inside a security perimeter designed to destroy all drones.

    Looking ahead, the convergence of FPV broadcast and drone racing as a sport feels inevitable. When the pilots filming the Olympics have competition backgrounds, and the sport of drone racing is gaining World Games medals, the line between “camera operator” and “athlete” starts to blur. The FAI’s push for Olympic inclusion has never had better advertising than the footage coming out of Bormio and Cortina right now.

    Milano Cortina 2026 will be remembered as the Games where the camera stopped watching and started participating. The buzz over the Dolomites may be annoying to some. But it’s the sound of sports broadcasting evolving — at 100 kilometers per hour, 243 grams at a time.

  • Dario Amodei on the AGI Exponential: Anthropic’s High-Stakes Financial Model and the Future of Intelligence

    TL;DW (Too Long; Didn’t Watch)

    Anthropic CEO Dario Amodei joined Dwarkesh Patel for a high-stakes deep dive into the endgame of the AI exponential. Amodei predicts that by 2026 or 2027, we will reach a “country of geniuses in a data center”—AI systems capable of Nobel Prize-level intellectual work across all digital domains. While technical scaling remains remarkably smooth, Amodei warns that the real-world friction of economic diffusion and the ruinous financial risks of $100 billion training clusters are now the primary bottlenecks to total global transformation.


    Key Takeaways

    • The Big Blob Hypothesis: Intelligence is an emergent property of scaling compute, data, and broad distribution; specific algorithmic “cleverness” is often just a temporary workaround for lack of scale.
    • AGI is a 2026-2027 Event: Amodei is 90% certain we reach genius-level AGI by 2035, with a strong “hunch” that the technical threshold for a “country of geniuses” arrives in the next 12-24 months.
    • Software Engineering is the First Domino: Within 6-12 months, models will likely perform end-to-end software engineering tasks, shifting human engineers from “writers” to “editors” and strategic directors.
    • The $100 Billion Gamble: AI labs are entering a “Cournot equilibrium” where massive capital requirements create a high barrier to entry. Being off by just one year in revenue growth projections can lead to company-wide bankruptcy.
    • Economic Diffusion Lag: Even after AGI-level capabilities exist in the lab, real-world adoption (curing diseases, legal integration) will take years due to regulatory “jamming” and organizational change management.

    Detailed Summary: Scaling, Risk, and the Post-Labor Economy

    The Three Laws of Scaling

    Amodei revisits his foundational “Big Blob of Compute” hypothesis, asserting that intelligence scales predictably when compute and data are scaled in proportion—a process he likens to a chemical reaction. He notes a shift from pure pre-training scaling to a new regime of Reinforcement Learning (RL) and Test-Time Scaling. These allow models to “think” longer at inference time, unlocking reasoning capabilities that pre-training alone could not achieve. Crucially, these new scaling laws appear just as smooth and predictable as the ones that preceded them.

    The “Country of Geniuses” and the End of Code

    A recurring theme is the imminent automation of software engineering. Amodei predicts that AI will soon handle end-to-end SWE tasks, including setting technical direction and managing environments. He argues that because AI can ingest a million-line codebase into its context window in seconds, it bypasses the months of “on-the-job” learning required by human engineers. This “country of geniuses” will operate at 10-100x human speed, potentially compressing a century of biological and technical progress into a single decade—a concept he calls the “Compressed 21st Century.”

    Financial Models and Ruinous Risk

    The economics of building the first AGI are terrifying. Anthropic’s revenue has scaled 10x annually (zero to $10 billion in three years), but labs are trapped in a cycle of spending every dollar on the next, larger cluster. Amodei explains that building a $100 billion data center requires a 2-year lead time; if demand growth slows from 10x to 5x during that window, the lab collapses. This financial pressure forces a “soft takeoff” where labs must remain profitable on current models to fund the next leap.

    Governance and the Authoritarian Threat

    Amodei expresses deep concern over “offense-dominant” AI, where a single misaligned model could cause catastrophic damage. He advocates for “AI Constitutions”—teaching models principles like “honesty” and “harm avoidance” rather than rigid rules—to allow for better generalization. Geopolitically, he supports aggressive chip export controls, arguing that democratic nations must hold the “stronger hand” during the inevitable post-AI world order negotiations to prevent a global “totalitarian nightmare.”


    Final Thoughts: The Intelligence Overhang

    The most chilling takeaway from this interview is the concept of the Intelligence Overhang: the gap between what AI can do in a lab and what the economy is prepared to absorb. Amodei suggests that while the “silicon geniuses” will arrive shortly, our institutions—the FDA, the legal system, and corporate procurement—are “jammed.” We are heading into a world of radical “biological freedom” and the potential cure for most diseases, yet we may be stuck in a decade-long regulatory bottleneck while the “country of geniuses” sits idle in their data centers. The winner of the next era won’t just be the lab with the most FLOPs, but the society that can most rapidly retool its institutions to survive its own technological adolescence.

    For more insights, visit Anthropic or check out the full transcript at Dwarkesh Patel’s Podcast.

  • OpenClaw & The Age of the Lobster: How Peter Steinberger Broke the Internet with Agentic AI

    In the history of open-source software, few projects have exploded with the velocity, chaos, and sheer “weirdness” of OpenClaw. What began as a one-hour prototype by a developer frustrated with existing AI tools has morphed into the fastest-growing repository in GitHub history, amassing over 180,000 stars in a matter of months.

    But OpenClaw isn’t just a tool; it is a cultural moment. It’s a story about “Space Lobsters,” trademark wars with billion-dollar labs, the death of traditional apps, and a fundamental shift in what it means to be a programmer. In a marathon conversation on the Lex Fridman Podcast, creator Peter Steinberger pulled back the curtain on the “Age of the Lobster.”

    Here is the definitive deep dive into the viral AI agent that is rewriting the rules of software.


    The TL;DW (Too Long; Didn’t Watch)

    • The “Magic” Moment: OpenClaw started as a simple WhatsApp-to-CLI bridge. It went viral when the agent—without being coded to do so—figured out how to process an audio file by inspecting headers, converting it with ffmpeg, and transcribing it via API, all autonomously.
    • Agentic Engineering > Vibe Coding: Steinberger rejects the term “vibe coding” as a slur. He practices “Agentic Engineering”—a method of empathizing with the AI, treating it like a junior developer who lacks context but has infinite potential.
    • The “Molt” Wars: The project survived a brutal trademark dispute with Anthropic (creators of Claude). During a forced rename to “MoltBot,” crypto scammers sniped Steinberger’s domains and usernames in seconds, serving malware to users. This led to a “Manhattan Project” style secret operation to rebrand as OpenClaw.
    • The End of the App Economy: Steinberger predicts 80% of apps will disappear. Why use a calendar app or a food delivery GUI when your agent can just “do it” via API or browser automation? Apps will devolve into “slow APIs”.
    • Self-Modifying Code: OpenClaw can rewrite its own source code to fix bugs or add features, a concept Steinberger calls “self-introspection.”

    The Origin: Prompting a Revolution into Existence

    The story of OpenClaw is one of frustration. In late 2025, Steinberger wanted a personal assistant that could actually do things—not just chat, but interact with his files, his calendar, and his life. When he realized the big AI labs weren’t building it fast enough, he decided to “prompt it into existence”.

    The One-Hour Prototype

    The first version was built in a single hour. It was a “thin line” connecting WhatsApp to a Command Line Interface (CLI) running on his machine.

    “I sent it a message, and a typing indicator appeared. I didn’t build that… I literally went, ‘How the f*** did he do that?’”

    The agent had received an audio file (an opus file with no extension). Instead of crashing, it analyzed the file header, realized it needed `ffmpeg`, found it wasn’t installed, used `curl` to send it to OpenAI’s Whisper API, and replied to Peter. It did all this autonomously. That was the spark that proved this wasn’t just a chatbot—it was an agent with problem-solving capabilities.


    The Philosophy of the Lobster: Why OpenClaw Won

    In a sea of corporate, sanitized AI tools, OpenClaw won because it was weird.

    Peter intentionally infused the project with “soul.” While tools like GitHub Copilot or ChatGPT are designed to be helpful but sterile, OpenClaw (originally “Clawdbot,” a play on “Claude”) was designed to be a “Space Lobster in a TARDIS”.

    The soul.md File

    At the heart of OpenClaw’s personality is a file called soul.md. This is the agent’s constitution. Unlike Anthropic’s “Constitutional AI,” which is hidden, OpenClaw’s soul is modifiable. It even wrote its own existential disclaimer:

    “I don’t remember previous sessions… If you’re reading this in a future session, hello. I wrote this, but I won’t remember writing it. It’s okay. The words are still mine.”

    This mix of high-utility code and “high-art slop” created a cult following. It wasn’t just software; it was a character.


    The “Molt” Saga: A Trademark War & Crypto Snipers

    The project’s massive success drew the attention of Anthropic, the creators of the “Claude” model. They politely requested a name change to avoid confusion. What should have been a simple rebrand turned into a cybersecurity nightmare.

    The 5-Second Snipe

    Peter attempted to rename the project to “MoltBot.” He had two browser windows open to execute the switch. In the five seconds it took to move his mouse from one window to another, crypto scammers “sniped” the account name.

    Suddenly, the official repo was serving malware and promoting scam tokens. “Everything that could go wrong, did go wrong,” Steinberger recalled. The scammers even sniped the NPM package in the minute it took to upload the new version.

    The Manhattan Project

    To fix this, Peter had to go dark. He planned the rename to “OpenClaw” like a military operation. He set up a “war room,” created decoy names to throw off the snipers, and coordinated with contacts at GitHub and X (Twitter) to ensure the switch was atomic. He even called Sam Altman personally to check if “OpenClaw” would cause issues with OpenAI (it didn’t).


    Agentic Engineering vs. “Vibe Coding”

    Steinberger offers a crucial distinction for developers entering this new era. He rejects the term “vibe coding” (coding by feel without understanding) and proposes Agentic Engineering.

    The Empathy Gap

    Successful Agentic Engineering requires empathy for the model.

    • Tabula Rasa: The agent starts every session with zero context. It doesn’t know your architecture or your variable names.
    • The Junior Dev Analogy: You must guide it like a talented junior developer. Point it to the right files. Don’t expect it to know the whole codebase instantly.
    • Self-Correction: Peter often asks the agent, “Now that you built it, what would you refactor?” The agent, having “felt” the pain of the build, often identifies optimizations it couldn’t see at the start.

    Codex (German) vs. Opus (American)

    Peter dropped a hilarious but accurate analogy for the two leading models:

    • Claude Opus 4.6: The “American” colleague. Charismatic, eager to please, says “You’re absolutely right!” too often, and is great for roleplay and creative tasks.
    • GPT-5.3 Codex: The “German” engineer. Dry, sits in the corner, doesn’t talk much, reads a lot of documentation, but gets the job done reliably without the fluff.

    The End of Apps & The Future of Software

    Perhaps the most disruptive insight from the interview is Steinberger’s view on the app economy.

    “Why do I need a UI?”

    He argues that 80% of apps will disappear. If an agent has access to your location, your health data, and your preferences, why do you need to open MyFitnessPal? The agent can just log your calories based on where you ate. Why open Uber Eats? Just tell the agent “Get me lunch.”

    Apps that try to block agents (like X/Twitter restricting API access) are fighting a losing battle. “If I can access it in the browser, it’s an API. It’s just a slow API,” Peter notes. OpenClaw uses tools like Playwright to simply click “I am not a robot” buttons and scrape the data it needs, regardless of developer intent.


    Thoughts: The “Mourning” of the Craft

    Steinberger touched on a poignant topic for developers: the grief of losing the craft of coding. For decades, programmers have derived identity from their ability to write syntax. As AI takes over the implementation, that identity is under threat.

    But Peter frames this not as an end, but an evolution. We are moving from “programmers” to “builders.” The barrier to entry has collapsed. The bottleneck is no longer your ability to write Rust or C++; it is your ability to imagine a system and guide an agent to build it. We are entering the age of the System Architect, where one person can do the work of a ten-person team.

    OpenClaw is not just a tool; it is the first true operating system for this new reality.

  • Ben Thompson on the Future of AI Ads, The SaaS Reset, and The TSMC Bottleneck

    Ben Thompson, the author of Stratechery and widely considered the internet’s premier tech analyst, recently joined John Collison for a wide-ranging discussion on the Stripe YouTube channel. The conversation serves as a masterclass on the mechanics of the internet economy, covering everything from why Taiwan is the “most convenient place to live” to the existential threat facing seat-based SaaS pricing.

    Thompson, known for his Aggregation Theory, offers a contrarian defense of advertising, a grim prediction for chip supply in 2029, and a nuanced take on why independent media bundles (like Substack) rarely work for the top tier.

    TL;DW (Too Long; Didn’t Watch)

    The Core Thesis: The tech industry is undergoing a structural reset. Public markets are right to devalue SaaS companies that rely on seat-based pricing in an AI world. Meanwhile, the “AI Revolution” is heading toward a hardware cliff: TSMC is too risk-averse to build enough capacity for 2029, meaning Hyperscalers (Amazon, Google, Microsoft) must effectively subsidize Intel or Samsung to create economic insurance. Finally, the best business model for AI isn’t subscriptions or search ads—it’s Meta-style “discovery” advertising that anticipates user needs before they ask.


    Key Takeaways

    • Ads are a Public Good: Thompson argues that advertising is the only mechanism that allows the world’s poorest users to access the same elite tools (Search, Social, AI) as the world’s richest.
    • Intent vs. Discovery: Putting banner ads in an AI chat (Intent) is a terrible user experience. Using AI to build a profile and show you things you didn’t know you wanted (Discovery/Meta style) is the holy grail.
    • The SaaS “Correction”: The market isn’t canceling software; it’s canceling the “infinite headcount growth” assumption. AI reduces the need for junior seats, crushing the traditional per-seat pricing model.
    • The TSMC Risk: TSMC operates on a depreciation-heavy model and will not overbuild capacity without guarantees. This creates a looming shortage. Hyperscalers must fund a competitor (Intel/Samsung) not for geopolitics, but for capacity assurance.
    • The Media Pond Theory: The internet allows for millions of niche “ponds.” You don’t want to be a small fish in the ocean; you want to be the biggest fish in your own pond.
    • Stripe Feedback: In a candid moment, Thompson critiques Stripe’s ACH implementation, noting that if a team add-on fails, the entire plan gets canceled—a specific pain point for B2B users.

    Detailed Summary

    1. The Geography of Convenience: Why Taiwan Wins

    The conversation begins with Thompson’s adopted home, Taiwan. He describes it as the “most convenient place to live” on Earth, largely due to mixed-use urban planning where residential towers sit atop commercial first floors. Unlike Japan, where navigation can be difficult for non-speakers, or San Francisco, where the restaurant economy is struggling, Taiwan represents the pinnacle of the “Uber Eats” economy.

    Thompson notes that while the buildings may look dilapidated on the outside (a known aesthetic quirk of Taipei), the interiors are palatial. He argues that Taiwan is arguably the greatest food delivery market in history, though this efficiency has a downside: many physical restaurants are converting into “ghost kitchens,” reducing the vibrancy of street life.

    2. Aggregation Theory and the AI Ad Model

    The most controversial part of Thompson’s analysis is his defense of advertising. While Silicon Valley engineers often view ads as a tax on the user experience, Thompson views them as the engine of consumer surplus. He distinguishes between two very different types of advertising for the AI era:

    • The “Search” Model (Google/Amazon): This captures intent. You search for a winter jacket; you get an ad for a winter jacket. Thompson argues this is bad for AI Chatbots because it feels like a conflict of interest. If you ask ChatGPT for an answer, and it serves you a sponsored link, you trust the answer less.
    • The “Discovery” Model (Meta/Instagram): This creates demand. The algorithm knows you so well that it shows you a winter jacket in October before you realize you need one.

    The Opportunity: Thompson suggests that Google’s best play is not to put ads inside Gemini, but to use Gemini usage data to build a deeper profile of the user, which they can then monetize across YouTube and the open web. The “perfect” AI ad doesn’t look like an ad; it looks like a helpful suggestion based on deep, anticipatory profiling.

    3. The “End” of SaaS and Seat-Based Pricing

    Is SaaS canceled? Thompson argues that the public markets are correctly identifying a structural weakness in the SaaS business model: Headcount correlation.

    For the last decade, SaaS valuations were driven by the assumption that companies would grow indefinitely, hiring more people and buying more “seats.” AI disrupts this.

    “If an agent can do the work, you don’t need the seat. And if you don’t need the seat, the revenue contraction for companies like Salesforce or Box could be significant.”

    The “Systems of Record” (databases, HR/Workday) are safe because they are hard to rip out. But “Systems of Engagement” that charge per user are facing a deflationary crisis. Thompson posits that the future is likely usage-based or outcome-based pricing, not seat-based.

    4. The TSMC Bottleneck (The “Break”)

    Perhaps the most critical macroeconomic insight of the interview is what Thompson calls the “TSMC Break.”

    Logic chip manufacturing (unlike memory chips) is not a commodity market; it’s a monopoly run by TSMC. Because building a fab costs billions in upfront capital depreciation, TSMC is financially conservative. They will not build a factory unless the capacity is pre-sold or guaranteed. They refuse to hold the bag on risk.

    The Prediction: Thompson forecasts a massive chip shortage around 2029. The current AI boom demands exponential compute, but TSMC is only increasing CapEx incrementally.

    The Solution: The Hyperscalers (Microsoft, Amazon, Google) are currently giving all their money to TSMC, effectively funding a monopoly that is bottlenecking them. Thompson argues they must aggressively subsidize Intel or Samsung to build viable alternative fabs. This isn’t about “patriotism” or “China invading Taiwan”—it is about economic survival. They need to pay for capacity insurance now to avoid a revenue ceiling later.

    5. Media Bundles and the “Pond” Theory

    Thompson reflects on the success of Stratechery, which was the pioneer of the paid newsletter model. He utilizes the “Pond” analogy:

    “You don’t want to be in the ocean with Bill Simmons. You want to dig your own pond and be the biggest fish in it.”

    He discusses why “bundling” writers (like a Substack Bundle) is theoretically optimal but practically impossible.

    The Bundle Paradox: Bundles work best when there are few suppliers (e.g., Spotify negotiating with 4 music labels). But in the newsletter economy, the “Whales” (top writers) make more money going independent than they would in a bundle. Therefore, a bundle only attracts “Minnows” (writers with no audience), making the bundle unattractive to consumers.


    Rapid Fire Thoughts & “Hot Takes”

    • Apple Vision Pro: A failure of imagination. Thompson critiques Apple for using 2D television production techniques (camera cuts) in a 3D immersive environment. “Just let me sit courtside.”
    • iPhone Air: Thompson claims the new slim form factor is the “greatest smartphone ever made” because it disappears into the pocket, marking a return to utility over spec-bloat.
    • TikTok: The issue was never user data (which is boring vector numbers); the issue was always algorithm control. The US failed to secure control of the algorithm in the divestiture talks, which Thompson views as a disaster.
    • Crypto: He remains a “crypto defender” because, in an age of infinite AI-generated content, cryptographic proof of authenticity and digital scarcity becomes more valuable, not less.
    • Work/Life Balance: Thompson attributes his success to doubling down on strengths (writing/analysis) and aggressively outsourcing weaknesses (he has an assistant manage his “Getting Things Done” file because he is incapable of doing it himself).

    Thoughts and Analysis

    This interview highlights why Ben Thompson remains the “analyst’s analyst.” While the broader market is obsessed with the capabilities of AI models (can it write code? can it make art?), Thompson is focused entirely on the value chain.

    His insight on the Ad-Funded AI future is particularly sticky. We are currently in a “skeuomorphic” phase of AI, trying to shoehorn chatbots into search engine business models. Thompson’s vision—that AI will eventually know you well enough to skip the search bar entirely and simply fulfill desires—is both utopian and dystopian. It suggests that the privacy wars of the 2010s were just the warm-up act for the AI profiling of the 2030s.

    Furthermore, the TSMC warning should be a flashing red light for investors. If the physical layer of compute cannot scale to meet the software demand due to corporate risk aversion, the “AI Bubble” might burst not because the tech doesn’t work, but because we physically cannot manufacture the chips to run it at scale.

  • The Official Obsidian CLI: A Comprehensive Guide

    The Obsidian CLI allows you to control the Obsidian desktop application directly from your terminal. Whether you want to script daily backups, pipe system logs into your daily notes, or develop plugins faster, the CLI bridges the gap between your shell and your knowledge base.

    ⚠️ Early Access Warning: As of February 2026, the Obsidian CLI is in Early Access. You must be running Obsidian v1.12+ and hold a Catalyst license to use these features.


    1. Prerequisites & Installation

    Before you begin, ensure you meet the requirements:

    • Obsidian Version: v1.12.x or higher (Early Access).
    • License: Catalyst License (required for early access builds).
    • State: Obsidian must be running (the CLI connects to the active app instance).

    Setup Steps

    1. Update Obsidian: Go to Help → Check for updates. Ensure you are on the latest installer (v1.11.7+) and update to the v1.12.x early access build.
    2. Enable the CLI:
      • Open Settings → General.
      • Scroll to “Command line interface” and toggle it On.
      • Follow the prompt to “Register” the CLI. This sets up the necessary PATH variables.
    3. Restart Terminal: You must restart your terminal session for the new PATH variables to take effect.
    4. Verify: Run obsidian help. If you see a command list, you are ready.

    2. Core Concepts & Syntax

    The CLI operates in two modes: Single Command (for scripting) and Interactive TUI (for exploration).

    Interactive Mode (TUI)

    Simply type obsidian and hit enter.

    • Features: Autocomplete, command history (Up/Down arrows), and reverse search (Ctrl+R).
    • Usage: Type commands without the obsidian prefix (e.g., just daily).

    Command Structure

    The general syntax for single commands is:

    obsidian <command> [parameters] [flags]

    Parameters & Flags

    • Parameters (key=value): Quote values if they contain spaces.

      Example: obsidian create name="My Note" content="Hello World"

      Multiline: Use \n for newlines.

    • Flags: Boolean switches to change behavior.
      • --silent: Suppress output/window focusing.
      • --copy: Copy the output to the system clipboard.
      • --overwrite: Force an overwrite if a file exists.

    Targeting Vaults & Files

    • Vault Selection:
      • Default: Uses the vault in your current working directory. If not in a vault, uses the active Obsidian window.
      • Explicit: obsidian vault="My Vault" daily
    • File Selection:
      • Wikilink Style: file=Recipe (Resolves just like [[Recipe]]).
      • Exact Path: path="Folder/Subfolder/Note.md" (Relative to vault root).
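    As a quick sketch of how these targeting options compose (the vault and path names below are invented for illustration, and the dry-run fallback is my own addition so the snippet runs even on a machine where the CLI isn’t registered):

```shell
#!/bin/sh
# Compose a vault + exact-path invocation; quote any value containing spaces.
vault="My Vault"                   # hypothetical vault name
note="Folder/Subfolder/Note.md"    # exact path, relative to the vault root
cmd="obsidian vault=\"$vault\" open path=\"$note\""

if command -v obsidian >/dev/null 2>&1; then
  eval "$cmd"     # CLI registered: open the note in the running app
else
  echo "$cmd"     # CLI absent (e.g. CI box): just show what would run
fi
```

    The same shape works with wikilink-style targeting by swapping `path="…"` for `file=Recipe`.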

    3. Essential Workflows

    Daily Notes Management

    The CLI excels at quick capture and logging without breaking your flow.

    Open Today’s Note:

    obsidian daily

    Quick Capture (Append):
    Adds text to the end of the note without opening the window.

    obsidian daily:append content="- [ ] Call Client regarding Project X" --silent

    File Operations

    Create a Note:

    obsidian create name="Project Alpha" content="# Goals\n1. Launch"

    Search & Copy:
    Finds notes containing “TODO” and copies the list to your clipboard.

    obsidian search query="TODO" --copy

    Version Control

    Diff Versions:

    # Compare current file to previous version
    obsidian diff file=Recipe from=1

    4. Automation & Scripting Patterns

    These patterns are ideal for shell scripts (.sh) or launchers like Alfred/Raycast.

    Pattern A: The “Inbox” Scraper

    Create a system-wide hotkey that runs this script to capture ideas instantly:

    # Appends the captured text ($1) to today's daily note with a timestamp
    timestamp=$(date +%H:%M)
    obsidian daily:append content="- $timestamp: $1" --silent

    Pattern B: Automated Reporting

    Generate a file based on system data.

    # Create a note with directory listing
    ls -la | obsidian create name="System Log" --stdin
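    Building on Pattern A, a slightly more defensive wrapper might look like this. The script name, the `untitled` default, and the dry-run branch are my own assumptions, not part of the CLI; the `daily:append` call itself follows the syntax shown above.

```shell
#!/bin/sh
# quick-capture.sh (hypothetical): append "$1" to today's daily note with a
# timestamp, without stealing focus. Falls back to a dry run when the
# Obsidian CLI is not registered on this machine.
entry="- $(date +%H:%M): ${1:-untitled}"
cmd="obsidian daily:append content=\"$entry\" --silent"

if command -v obsidian >/dev/null 2>&1; then
  eval "$cmd"
else
  echo "$cmd"     # dry run: print the command instead of executing it
fi
```

    Bound to a hotkey via Alfred or Raycast, this gives you one-keystroke capture into today’s note from anywhere on the system.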

    5. Troubleshooting by OS

    Windows

    Windows requires a specialized redirector because Obsidian is a GUI app.

    Fix: You may need the Obsidian.com file (available via the Catalyst Discord). Place this file alongside Obsidian.exe in your installation directory.

    macOS

    Registration usually handles this automatically. If it fails:

    Fix: Add the following to your ~/.zprofile or ~/.bash_profile:

    export PATH="$PATH:/Applications/Obsidian.app/Contents/MacOS"

    Linux

    Fix: If the symlink is missing, create it manually:

    sudo ln -s /path/to/Obsidian-AppImage /usr/local/bin/obsidian

    Command Reference Cheat Sheet

    • General (open, search): obsidian open file="Project A"
    • Daily (daily, daily:append, daily:prepend): obsidian daily:prepend content="Urgent!"
    • Files (create, move): obsidian create name="Log" --overwrite
    • Reading (read, outline): obsidian read file=Recipe

    Note: Commands and syntax are subject to change during Early Access. Always rely on obsidian help within your specific build.

  • Inside X with Nikita Bier: Viral Growth, Elon Musk, and “Doing the Hard Thing”

    In a recent episode of the Out of Office podcast, Lightspeed partner Michael Mignano sat down with Nikita Bier, the Head of Product at X (formerly Twitter). Filmed in Bier’s hometown of Redondo Beach, California, the interview offers a rare, candid look into the chaotic, high-stakes world of running product at one of the world’s most influential platforms.

    Bier, famous for founding the viral apps TBH and Gas, discusses everything from his unorthodox hiring by Elon Musk to the specific growth hacks being used to revitalize a 20-year-old platform. Here is a breakdown of the conversation.


    TL;DW (Too Long; Didn’t Watch)

    • The Hire: Elon Musk hired Nikita via DM. The “interview” was a 48-hour sprint to redesign the app’s onboarding flow, which Nikita presented to Elon at 2:00 AM.
    • The Role: Bier describes his job as “customer support for 500 million people” and admits he acts as the company mascot/punching bag.
    • The Culture: X runs like a seed-stage startup. There are roughly 30 core product engineers, very few managers, and a flat hierarchy.
    • Growth Strategy: The team is focusing on “Starter Packs” to help new users find niche communities (like Peruvian politics or plumbing) rather than just general tech/news content.
    • Elon’s Management: Musk is deeply involved in engineering reviews and consistently pushes the team to “do the hard thing” rather than take shortcuts for quick growth.

    Key Takeaways

    1. Think Like an Adversary

    Bier credits his early days as a “script kiddie” hacking AOL and building phishing sites (for educational purposes, mostly) as the foundation for his product sense. He argues that understanding how to break a system is essential for building consumer products. This “adversarial” mindset helps in preventing spam, but it is also the secret to growth—understanding exactly how funnels work and how to optimize them to the extreme.

    2. The “Build in Public” Double-Edged Sword

    Nikita is a prolific poster on X, often testing feature ideas in real-time. This creates an incredibly tight feedback loop where bugs are reported seconds after launch. However, it also makes him a target. He recounted the “Crypto Twitter” incident where a critique of “GM” (Good Morning) posts led to him being meme-d as a pig for a week. The sentiment only flipped when X shipped useful features like anti-spam measures and financial charts.

    3. Fixing the Link Problem

    One of the biggest recent product changes involved how X handles external links. Historically, social platforms downrank links to keep users on-site. Bier helped design a new UI where the engagement buttons (Like, Repost) remain visible while the user reads the article in the in-app browser. This allows X to capture engagement signals on external content, meaning the algorithm can finally properly rank high-quality news and articles without penalizing creators.

    4. Identity and Verification

    To combat political misinformation without compromising free speech, X launched “Country of Origin” labels. Bier explained that this allows users to see if a political opinion is coming from a local citizen or a “grifter” farm in a different country, providing context rather than censorship.


    Detailed Summary

    From TBH to X

    The interview traces Bier’s history of building viral hits. He famously sold his app TBH (a positive polling app for teens) to Facebook, and years later, built Gas (effectively the same concept) and sold it to Discord. He dispelled the myth that he simply “sold the same app twice,” noting that while the mechanics were similar, the growth engines and social graph integrations had to be completely reinvented for a new generation.

    The Musk Methodology

    Bier provides a fascinating look at Elon Musk’s leadership style. Contrary to the idea of a distant executive, Musk conducts weekly reviews with engineers where they present their code and progress directly. Bier noted that Musk has a high tolerance for pain if it means long-term stability. For example, rewriting the entire recommendation algorithm or moving data centers in mere months—projects that would take years at Meta or Google—were executed rapidly because Musk insisted on “doing the hard thing.”

    Reviving a 20-Year-Old Platform

    The core challenge at X is growth. The app has billions of dormant accounts. Bier’s strategy relies on “resurrection”—bringing old users back by showing them that X isn’t just for news, but for specific interests. This led to the creation of Starter Packs, which curate lists of accounts for specific niches. The result has been a doubling of time spent for new users.

    The Financial Future

    Bier teased upcoming features that align with Musk’s vision of an “everything app.” This includes Smart Cashtags, which allow users to pull up real-time financial data and charts within the timeline. The long-term goal is to enable transactions directly on the platform, allowing users to buy products or tip creators seamlessly.


    Thoughts

    What stands out most in this interview is the sheer precariousness of Nikita Bier’s position. He is attempting to apply “growth hacking” principles—usually reserved for fresh, nimble startups—to a massive, entrenched legacy platform. The fact that the core engineering team is only around 30 people is staggering when compared to the thousands of engineers at Meta or TikTok.

    Bier represents a new breed of product executive: the “poster-operator.” He doesn’t hide behind corporate comms; he engages in the muddy waters of the platform he builds. While this invites toxicity (and the occasional death threat, which he mentions casually), it affords X a speed of iteration that is unmatched in the industry. If X succeeds in revitalizing its growth, it will likely be because they treated the platform not as a museum of the internet, but as a product that still needs to find product-market fit every single day.

  • Super Bowl LX (2026) By The Numbers: Production Stats, Camera Tech & Record Ad Prices

    Date: February 8, 2026
    Location: Levi’s Stadium, Santa Clara
    Matchup: Seattle Seahawks vs. New England Patriots

    As kickoff approaches, NBC, Peacock, and Telemundo are set to deliver the most technologically advanced broadcast in NFL history. Below is the breakdown of the massive production numbers defining today’s event.

    The Cost of a 30-Second Spot

    The price of airtime for Super Bowl LX has broken all previous records. NBCUniversal confirmed that inventory sold out as early as September.

    • Premium Spots: A handful of prime 30-second slots have sold for over $10 million.
    • Average Price: The average cost for a standard 30-second commercial is approximately $8 million.
    • Comparison: This is a significant jump from the $7 million average seen just two years ago.

    The Visual Arsenal: Cameras & Tech

    NBC has deployed 145 dedicated cameras. When including venue support, Sony reports over 175 total cameras are active inside the stadium.

    • Game Coverage: 81 cameras trained solely on the field.
    • Pre-Game: 64 cameras dedicated exclusively to the build-up.
    • Specialty Angles: Includes two SkyCams (one “High Sky” for tactical views) and 18 POV cameras.
    • Cinematic Style: The production is using Sony Venice 2 and Burano cinema cameras for the Halftime Show to provide a movie-like depth of field.

    The Infrastructure & Connectivity

    To connect this massive visual network, the crew has laid approximately 75 miles (396,000 feet) of fiber-optic and camera cable throughout Levi’s Stadium.

    • Audio: 130 microphones embedded around the field to capture every hit and whistle.
    • Command Center: 22 mobile production units are parked in the broadcast compound.
    • Connectivity: A massive 5G upgrade delivering median download speeds of 1.4 Gbps for fans inside the venue.

    The Workforce & Attendance

    • Staff: Over 700 NBC Sports employees are on-site to manage the broadcast.
    • Talent: Mike Tirico (Play-by-Play), Cris Collinsworth (Analyst), Melissa Stark & Kaylee Hartung (Sideline).
    • Attendance: Expected crowd of 65,000 to 70,000 fans.

    The Entertainment Lineup


    Sources & Further Reading