In a move that underscores the intensifying race to dominate AI agent technology, OpenAI has brought aboard Peter Steinberger, the visionary Austrian developer behind the viral open-source project OpenClaw. As reported by Reuters, Fortune, and TechCrunch, the deal was announced on February 15, 2026. This isn’t a conventional acquisition but an “acquihire,” where Steinberger joins OpenAI to spearhead the development of next-generation personal AI agents.
Meanwhile, OpenClaw transitions to an independent foundation, remaining fully open-source with continued support from OpenAI (confirmed via Steinberger’s Blog and LinkedIn). This strategic alignment comes amid soaring interest in AI agents, a market projected by AInvest to hit $52.6 billion by 2030 with a 46.3% compound annual growth rate.
The announcement, made via a post on X by OpenAI CEO Sam Altman around 21:39 GMT, arrived just hours before widespread media coverage from outlets like Fortune. Steinberger swiftly confirmed the news in a personal blog post, emphasizing his excitement for the future while reaffirming OpenClaw’s independence.
The Rise of OpenClaw: From Playground Project to Phenomenon
OpenClaw, originally launched as Clawdbot in November 2025—a playful nod to Anthropic’s Claude model—quickly evolved into a powerhouse open-source AI agent framework designed for personal use (Fortune, Steinberger’s Blog, APIYI). Steinberger, who “vibe coded” the project solo after a three-year hiatus following the sale of his previous company for over $100 million, saw it explode in popularity. It amassed over 100,000 GitHub stars, drew 2 million visitors in a week, and became the fastest-growing repo in GitHub history—surpassing milestones of projects like React and Linux (Yahoo Finance, LinkedIn).
A trademark dispute with Anthropic prompted renames: first to Moltbot (evoking metamorphosis), then to OpenClaw in early 2026. The framework empowers AI to autonomously handle tasks on users’ devices, fostering a community focused on data ownership and multi-model support.
Key capabilities that fueled its hype include:
Managing emails and inboxes.
Booking flights and restaurant reservations, and handling flight check-ins.
Interacting with services like insurers.
Integrating with apps such as WhatsApp and Slack for task delegation.
Creating a “social network” for AI agents via features like Moltbook, which spawned 1.6 million agents.
Despite its success, sustainability proved challenging. Steinberger personally shouldered infrastructure costs of $10,000 to $20,000 monthly, routing sponsorships to dependencies rather than himself, even as donations and corporate support (including from OpenAI) trickled in.
The Path to the Deal: Billion-Dollar Bids and Open-Source Principles
Prior to the announcement, Steinberger fielded billion-dollar acquisition offers from tech giants Meta and OpenAI (Yahoo Finance). Meta’s Mark Zuckerberg personally messaged Steinberger on WhatsApp, sparking a 10-minute debate over AI models, while OpenAI’s Sam Altman offered computational resources via a Cerebras partnership to boost agent performance. Meta aggressively pursued Steinberger and his team, but OpenAI advanced in talks to hire him and key contributors.
Steinberger spent the preceding week in San Francisco meeting AI labs, accessing unreleased research. He insisted any deal preserve OpenClaw’s open-source nature, likening it to Chrome and Chromium. Ultimately, OpenAI’s vision aligned best with his goal of accessible agents.
Key Announcements and Voices from the Frontlines
Sam Altman, in his X post on February 15, 2026, hailed Steinberger as a “genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people.” He added, “We expect this will quickly become core to our product offerings. OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. The future is going to be extremely multi-agent and it’s important to us to support open source as part of that.”
Steinberger’s blog post echoed this enthusiasm: “tl;dr: I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent. The last month was a whirlwind… When I started exploring AI, my goal was to have fun and inspire people… My next mission is to build an agent that even my mum can use… I’m a builder at heart… What I want is to change the world, not build a large company… The claw is the law.”
Strategic Implications: Opportunities and Challenges Ahead
For OpenAI, the deal bolsters its AI agent push, potentially accelerating consumer-grade solutions and addressing adoption barriers like setup complexity and security. It positions the company against Meta in the “personal agent race,” with an emphasis on multi-agent systems. The broader AI agents market could reach $180 billion by 2033, context that helps explain the undisclosed but likely substantial financial terms.
OpenClaw benefits from foundation status (akin to the Linux Foundation), ensuring independence and community focus with OpenAI’s sponsorship.
However, risks loom large. OpenClaw’s “unfettered access” to devices raises security concerns, including data breaches and rogue actions—like one incident of spamming hundreds of iMessages. China’s industry ministry warned of cyberattack vulnerabilities if misconfigured. Steinberger aims to prioritize safety and accessibility.
Community Pulse: Excitement, Skepticism, and Satire
Reactions on X blend hype and caution. Cointelegraph called it a “big move” for agent ecosystems. One user called it the “birth of the agent era,” while another satirically predicted a shift to “ClosedClaw.” Fears of closure persist, but congratulations abound, with some viewing Anthropic’s trademark push as a “fumble.”
LinkedIn’s Reyhan Merekar praised Steinberger’s solo feat: “Literally coding alone at odd hours… Faster than React, Linux, and Kubernetes combined.”
Beyond the Headlines: Vision and Value
Steinberger’s core vision: Agents for all, even non-tech users, with emphasis on safety, cutting-edge models, and impact over empire-building. OpenClaw’s strengths—model-agnostic design, delegation-focused UX, and persistent memory—eluded even well-funded labs.
As of February 15, 2026, this marks a pivotal moment in AI’s evolution, blending open innovation with corporate muscle. No further updates have emerged, but the multi-agent future Altman envisions is accelerating.
Inside the 243-gram flying cameras chasing Olympic gold at 90 mph — the pilots, the tech, the controversy, and why Milano Cortina 2026 will be remembered as the “Drone Games.”
TLDR
The Milano Cortina 2026 Winter Olympics have deployed FPV (First-Person View) drones as a core broadcast tool for the first time in Winter Games history. A fleet of 25 custom-built drones — weighing just 243 grams each and capable of 90 mph — are chasing bobsleds through ice canyons, diving off ski jumps alongside athletes, and orbiting snowboarders mid-trick. Built by the Dutch firm “Dutch Drone Gods” and operated by former athletes turned drone pilots, the system uses a dual-feed transmission architecture that sends ultra-low-latency video to the pilot’s goggles while simultaneously beaming broadcast-quality HD to the production truck. The result is footage that makes viewers feel like they’re sitting on the athlete’s shoulder. But the revolution comes with a buzzkill — literally. The drones’ high-pitched whine has sparked a global “angry mosquito” debate, and Italian defense contractor Leonardo has erected an invisible “Electronic Dome” of radar and jamming systems over the Dolomites to keep unauthorized drones out. Love it or hate it, FPV has graduated from experiment to Olympic standard, and the 2030 French Alps Games will inherit everything Milano Cortina pioneered.
Key Takeaways
First-ever structural FPV integration at a Winter Olympics. These aren’t novelty shots — FPV is the default angle for replays and key live segments in speed disciplines at Milano Cortina 2026.
25 custom drones, 15 dedicated FPV teams. The fleet is built by Dutch Drone Gods for Olympic Broadcasting Services (OBS), each unit weighing just 243 grams with top speeds of 140 kph.
Dual-feed transmission solves the latency problem. Pilots see 15-40ms ultra-low-latency video through their goggles while a separate HD broadcast feed with 300-400ms delay goes to the production truck via COFDM signal.
Pilots are former athletes. Ex-Norwegian ski jumper Jonas Sandell flies the ski jumping coverage. He anticipates the “lift” because he’s done it himself thousands of times.
Three-person teams modeled on military aviation. Every flight requires a pilot (goggles on, zero outside awareness), a spotter (line-of-sight, abort authority), and a director (in the OB truck, calling the live cut).
The inverted propeller design is the secret weapon. Mounting motors upside-down lowers the center of gravity and lets the drone “carve” air like a skier carves snow — smoother banking, cleaner footage.
Battery life is 5 minutes in sub-zero conditions. Heated cabins keep LiPo packs at body temperature until seconds before flight. Cold batteries can voltage-sag and drop a drone mid-chase.
Leonardo’s “Electronic Dome” protects the airspace. Tactical radar, RF sniffing, and electronic jamming distinguish sanctioned drones from rogue threats. Unauthorized flight is a criminal offense.
The “angry mosquito” controversy is real. Props spinning at 30,000+ RPM emit a 400Hz-8kHz whine that cuts through the natural soundtrack of winter sports. AI audio scrubbing is in development for 2030.
93% viewership spike. 26.5 million viewers in the first five days — and FPV footage is being credited as a major factor.
The Full Story
As the 2026 Winter Olympic Games in Milano Cortina reach their halfway point, a singular technological narrative has emerged that eclipses even the introduction of ski mountaineering or the unprecedented decentralized venue structure spanning 400 kilometers of northern Italy. It’s not a new sport. It’s a new way of seeing sport.
For the first time in Winter Olympics history, First-Person View drones have been deployed not as experimental novelties bolted onto the margins of production, but as the primary architectural component of the live broadcast for speed disciplines. From the icy chutes of the Cortina Sliding Centre to the vertical drops of the Stelvio Ski Centre in Bormio, a fleet of custom-engineered, high-speed micro-drones is fundamentally altering the viewer’s relationship with gravity, velocity, and fear.
No longer tethered to fixed cable cams or zoomed-in telephoto lenses that compress depth and flatten the terror of a 90 mph descent, audiences are now riding shotgun. They’re sitting on the shoulder of a downhill skier as she threads a 2-meter gap between dolomite rock walls. They’re matching a four-man bobsled through a concrete-and-ice canyon where the walls blur into a warp-speed tunnel. They’re floating parallel to a ski jumper at the apex of a 140-meter flight, looking down at the terrifying void between athlete and earth.
This is FPV at the Olympics. And it changes everything.
The Hardware: 243 Grams of Purpose-Built Fury
The drones chasing Olympic gold are nothing like the DJI Mavic sitting in your closet. They are bespoke, purpose-built broadcast machines designed to survive a hostile alpine environment while delivering cinema-grade imagery at insane speeds. The fleet comprises approximately 25 active units with 15 dedicated FPV teams, and the hardware was developed by the Netherlands-based firm Dutch Drone Gods (DDG) in partnership with Olympic Broadcasting Services.
The engineering brief was a paradox: build something fast enough to chase a bobsled at 140 kph, yet light enough that if it ever made contact with an athlete, the damage would be survivable. The answer weighs 243 grams — just under the critical 250-gram threshold that triggers stricter aviation classification in most jurisdictions.
Core Specs at a Glance
| Feature | Specification | Why It Matters |
| --- | --- | --- |
| Weight | 243 grams | Sub-250g classification bypasses stricter aviation rules; minimizes impact energy |
| Propeller mount | Inverted “pusher” motors | Lowered center of gravity; cleaner air over props for smoother banking |
| Operating Temp | -20°C to +5°C | LiPo batteries pre-heated in thermal warmers to prevent voltage sag |
| Pilot Feed | DJI O4 Air Unit, 15-40ms latency | Reflex-speed video to goggles — the pilot’s “nervous system” |
| Broadcast Feed | Proton CAM Full HD Mini + Domo Pico Tx, 300-400ms latency | HD HDR signal via COFDM to production truck — the “visual cortex” |
The Inverted Propeller Innovation
The single most important hardware decision DDG made was mounting the motors upside down. In a traditional drone, propellers sit above the arms and push air downward over the frame, creating turbulence. The Olympic drones flip this — motors are mounted below the arms in a “pusher” configuration.
The physics payoff is significant. When chasing a skier through a Super-G turn, the drone must bank aggressively — sometimes 60-70 degrees. The inverted design lowers the center of gravity, allowing the drone to “carve” through the air the way a ski carves through snow. The result is footage with smooth, sweeping curves that mirror the athlete’s line rather than fighting it. And because the propellers push air away from the frame rather than washing it over the body, there’s less self-induced turbulence — critical when you’re flying centimeters from ice inside a bobsled track.
The Dual-Feed Architecture: Two Brains, One Drone
Here’s the fundamental problem with live FPV broadcast: a pilot flying at 90 mph needs to see what the drone sees instantly. Even a half-second delay and you’ve already crashed. But broadcast television needs high-definition, color-corrected, HDR imagery — processing that inherently introduces latency.
The solution is elegant: each drone carries two independent transmission systems.
The pilot feed runs through a DJI O4 Air Unit at 15-40 milliseconds of latency. It’s lower resolution, optimized purely for frame rate and response time. This is the drone’s “nervous system” — raw, twitchy, and fast. Only the pilot sees it.
The broadcast feed uses a completely separate camera (Proton CAM Full HD Mini) and transmitter (Domo Pico Tx), running at 300-400ms latency via COFDM signal — a modulation scheme specifically chosen because it’s robust against the multipath interference caused by radio signals bouncing off dolomite rock walls and concrete sliding tracks. This feed goes straight to the Outside Broadcast van where it’s color-graded and cut into the world feed alongside 800 other cameras.
The result: the pilot flies on instinct while the world watches in HD. Two realities, one airframe.
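To make the two latency budgets concrete, here is a back-of-the-envelope sketch in Python (speeds and latencies taken from the spec table above; illustrative arithmetic, not OBS engineering math):

    # How far does the drone move before its own video arrives?
    speed_kph = 140
    speed_ms = speed_kph * 1000 / 3600            # ~38.9 m/s at full chase

    for feed, latency_s in [("pilot, 40 ms", 0.040), ("broadcast, 400 ms", 0.400)]:
        drift = speed_ms * latency_s
        print(f"{feed}: {drift:.1f} m of blind travel")
    # pilot, 40 ms: 1.6 m of blind travel
    # broadcast, 400 ms: 15.6 m of blind travel

A meter and a half of blind travel is recoverable on reflex; fifteen meters inside a bobsled track is a crash. That asymmetry is the entire reason only the low-latency feed goes to the goggles.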
The Human Element: Athletes Flying Athletes
The most fascinating aspect of the 2026 FPV program isn’t the hardware — it’s the hiring strategy. OBS and its broadcast partners realized early on that following a ski jumper off a 140-meter hill requires more than stick skills. It requires understanding what the athlete’s body is about to do before it does it.
So they recruited athletes.
Jonas Sandell is a former member of the Norwegian national ski jumping team. He now flies FPV for OBS at the Predazzo Ski Jumping Stadium. His athletic background gives him something no amount of simulator time can replicate: a proprioceptive understanding of when a jumper will “pop” off the table and transition from running to flying. He anticipates the lift phase — throttling up the drone milliseconds before the visual cue — because his own body remembers the feeling. He knows the flight envelope of a ski jumper because he used to be the flight envelope.
For the sliding sports — luge, skeleton, bobsled — the pilot known as “ShaggyFPV” from Dutch Drone Gods leads what might be the most dangerous camera crew at the Games. Flying inside the bobsled track is essentially flying inside a concrete pipe with no GPS, no stabilization assists, and a 1,500-kilogram sled bearing down at 140 kph. ShaggyFPV and his team fly up to 50 runs per session, building muscle memory of every curve and transition so deeply that the flying becomes subconscious. If a sled crashes and rides up the walls, the pilot must have a faster-than-conscious “bail out” reflex — throttle up and out of the track instantly to avoid becoming a 243-gram projectile aimed at a downed athlete.
The Three-Person Team Protocol
No FPV drone flies alone at the Olympics. Every unit operates under a strict three-person crew structure modeled on military aviation:
The Pilot — goggles on, immersed in the FPV feed, zero awareness of the physical world. They fly on reflex and audio cues.
The Spotter/Technician — maintains visual line-of-sight with the drone at all times. Monitors signal strength, battery voltage, wind, and physical hazards. Has unilateral “tap on the shoulder” authority to abort any flight, no questions asked.
The Director — sits in the warmth of the OB truck, watching the drone feed alongside 20+ other camera angles. Calls the shot: “Drone 1, stand by… and TAKE.” Coordinates the cut so the drone enters the broadcast mix at exactly the right moment.
This three-person ballet is performed hundreds of times a day across all venues. It’s the invisible choreography that makes the “wow” moments look effortless.
The Visual Philosophy: “Movement in Sport”
Mark Wallace, OBS Chief Content Officer, defined the visual strategy for 2026 with a two-word mandate: “Movement in Sport.” The goal isn’t just to show what happened. It’s to make the viewer feel what happened.
In alpine skiing, the drone doesn’t just follow — it mimics. When the skier tucks, the drone drops altitude. When the skier carves, the drone banks. The camera becomes a kinesthetic mirror, conveying the violence of the vibration and the crushing G-forces in a way that a static telephoto shot from the sideline never could.
In ski jumping, the drone tracks parallel to the athlete mid-flight, revealing the true scale of a 140-meter jump — the terrifying height, the impossible hang time, the narrow margin between textbook landing and catastrophe. Tower cameras flatten this. FPV restores it.
In the sliding sports, the FPV drone may be the only camera capable of honestly conveying speed. Fixed trackside cameras pan so fast the sled blurs into abstraction. But the drone matches velocity, keeping the sled in razor-sharp focus while the ice walls dissolve into a warp-speed tunnel around it. For the first time, viewers at home can viscerally understand why bobsled pilots describe their sport as “controlled falling.”
And in snowboard and freestyle at Livigno, the pilots have creative license to orbit athletes mid-trick, creating real-time “Bullet Time” effects that would have required a Hollywood rig and months of post-production just a decade ago.
Venue by Venue: Where FPV Shines (and Struggles)
Milano Cortina 2026 is the most geographically dispersed Olympics in history, with venues stretching across hundreds of kilometers of northern Italy. Each location presents unique challenges that force the FPV teams to adapt their hardware, techniques, and risk calculus.
Bormio — The Vertical Wall
The Stelvio Ski Centre hosts men’s alpine skiing on one of the steepest, iciest, most terrifying courses in the world. The north-facing slope sits in perpetual shadow. Pilots switch to heavier 7-inch drone configurations here to fight the brutal updrafts on the exposed upper mountain. The “San Pietro” jump — one of the Stelvio’s signature features — requires the drone to dive with the skier off a cliff at 140 kph, judging the athlete’s speed with centimeter-level precision. Too slow and the skier vanishes. Too fast and the shot is ruined.
Cortina d’Ampezzo — The Amphitheater
At the Olympia delle Tofane, women’s alpine skiing threads through massive dolomite rock formations. The challenge here is dual: RF multipath (radio signals bouncing off rock walls threaten to break up the video feed) and extreme light contrast (bright sun to deep rock shadow in seconds). The COFDM transmission system earns its keep here, and technicians in the truck ride the iris and ISO controls like a musician riding a fader.
The Cortina Sliding Centre is the most technically demanding FPV environment at the Games. A concrete and ice canyon with no GPS signal. Pilots fly purely on muscle memory in Acro mode — no stabilization, no computer assistance, just stick and reflex. Every flight carries an abort plan because if a sled crashes, the drone needs to exit the track faster than human thought.
Livigno — The Playground
The open terrain of the Livigno Snow Park is where FPV gets to play. In Big Air, drones orbit rotating athletes. In Slopestyle, they chase riders across sequences of rails and jumps. When a rider checks speed to set up a trick, the drone “yaws” — turning sideways to increase drag and bleed speed instantly. It’s the most creatively expressive FPV work at the Games.
Milan — The Indoor Frontier
The most experimental deployment is indoors at the Mediolanum Forum for speed skating. Metal stadium beams create RF havoc, reflecting signals and causing video breakup. The solution: specialized RF repeaters and miniaturized 2.5-inch shrouded Cinewhoops safe to fly near crowds. The drones track skaters from inside the oval, revealing the tactical chess of team pursuit events in a way overhead cameras never could. Pilots fly in full manual mode with the compass disabled — the steel structure would send a magnetometer haywire.
The Physics Problem: Flying Fast in Thin, Frozen Air
Flying a 243-gram drone at 2,300 meters above sea level in -20°C is not the same as flying it in a parking lot in the Netherlands. The physics conspire against you at every level.
Thin air. At the Bormio start elevation of 2,255 meters, air density is significantly lower than at sea level. Propellers generate lift by moving air, and when there’s less air to move, the props must spin faster. This draws more current, drains batteries faster, and makes the drone feel “looser” — less grip on the air, harder to hold tight lines. The DDG drones use high-pitch propellers and high-KV motors that bite aggressively into the thin atmosphere to compensate.
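For a rough sense of the penalty, the standard isothermal barometric approximation gives the density drop; this is a textbook sketch with a typical scale height, not DDG's numbers:

    import math

    # rho(h) = rho0 * exp(-h / H), with H ~ 8,400 m scale height
    rho0, H = 1.225, 8400                  # sea-level density (kg/m^3), m
    for h_m in (0, 2255):                  # sea level vs. the Bormio start
        rho = rho0 * math.exp(-h_m / H)
        print(f"{h_m} m: {rho:.2f} kg/m^3 ({rho / rho0:.0%} of sea level)")
    # 2255 m: ~0.94 kg/m^3 -- roughly three-quarters of sea-level density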
Cold batteries. Lithium-polymer battery chemistry slows down as temperature drops. Internal resistance rises. When the pilot punches the throttle to chase a skier out of the start gate, the battery voltage can plummet — a phenomenon called “voltage sag” — potentially triggering a low-voltage cutoff that kills the drone mid-flight. The “Heated Cabin” protocol is not a comfort measure; it’s mission-critical. Batteries are stored at body temperature (~37°C) in thermal warmers until the final seconds before flight, and packs are swapped every two runs even if they’re not fully depleted.
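The sag itself is Ohm's law acting on a pack whose internal resistance has climbed in the cold. A sketch with hypothetical pack values (the current and resistances are assumptions, not DDG specs):

    # Hypothetical 6S pack: deep cold roughly quintuples internal resistance.
    cells, v_cell = 6, 4.2                 # fully charged 6S pack
    cutoff_v = 3.0 * cells                 # assumed low-voltage cutoff (18 V)
    burst_a = 60.0                         # assumed punch-out current draw

    for label, r_cell in [("warm pack", 0.005), ("cold pack", 0.025)]:
        v_loaded = cells * (v_cell - burst_a * r_cell)
        status = "OK" if v_loaded > cutoff_v else "CUTOFF -- drone drops"
        print(f"{label}: {v_loaded:.1f} V under load ({status})")
    # warm pack: 23.4 V under load (OK)
    # cold pack: 16.2 V under load (CUTOFF -- drone drops)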
Blinding contrast. The visual environment of winter sports is an exposure nightmare: blinding white snow and ink-black shadows from rock formations. The Proton CAM was selected specifically for its HDR capability, resolving detail in both extremes simultaneously. But it’s not set-and-forget — technicians in the truck ride the exposure adjustments in real-time as the drone descends from sun to shadow and back.
The Electronic Dome: Security in the Sky
While OBS drones are the stars of the broadcast, they fly in one of the most securitized airspaces on the planet. The Alps present a defender’s nightmare: valleys provide radar shadows where a rogue drone can launch from a hidden valley floor, pop over a ridge, and be over a stadium in seconds.
Italian defense giant Leonardo, appointed as Premium Partner for security and mission-critical communications, has erected a multi-layered Counter-UAS defense grid — an invisible “Electronic Dome” — over every venue.
The system works in three phases:
Detection. Tactical Multi-mission Radar (TMMR) — an AESA array optimized for “low, slow, and small” targets — scans the mountain clutter for anything that shouldn’t be there. Simultaneously, passive RF sensors listen for the telltale handshake signals between a remote controller and a drone.
Classification. Once detected, the system must instantly determine friend or foe. OBS drones broadcast specific Remote ID signatures and operate on reserved, whitelisted frequencies coordinated with ENAC (the Italian Civil Aviation Authority). Anything detected outside the predefined 3D geofences is flagged as hostile.
Mitigation. At an Olympic venue, you can’t shoot a drone down — falling debris over a crowd of thousands is not an option. Instead, Leonardo’s Falcon Shield technology performs a “soft kill,” flooding the rogue drone’s control frequencies (2.4GHz / 5.8GHz) with electronic noise. With its link severed, most consumer drones hover momentarily and then execute a Return-to-Home. Tactical teams on the ground carry handheld jamming rifles for close-range backup.
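In software terms, the classification phase reduces to a whitelist-plus-geofence check. A minimal sketch with hypothetical Remote IDs and a simple circular fence; Leonardo's real classifier is far more sophisticated:

    from dataclasses import dataclass
    from math import hypot

    @dataclass
    class Track:
        remote_id: str | None      # broadcast Remote ID, if any
        x_m: float                 # position relative to venue center
        y_m: float

    WHITELIST = {"OBS-DDG-01", "OBS-DDG-02"}   # hypothetical sanctioned IDs
    FENCE_RADIUS_M = 500.0                     # hypothetical geofence

    def classify(t: Track) -> str:
        inside = hypot(t.x_m, t.y_m) <= FENCE_RADIUS_M
        if t.remote_id in WHITELIST and inside:
            return "friendly"
        return "hostile"           # hand off to the soft-kill layer

    print(classify(Track("OBS-DDG-01", 120, -80)))  # friendly
    print(classify(Track(None, 300, 400)))          # hostile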
ENAC has designated all Olympic venues as temporary “Red Zones” from February 6-22. Unauthorized drone flight in these zones isn’t a civil fine — it’s a criminal offense under the Games’ National Security designation. The US Diplomatic Security Service has gone so far as to warn American travelers that Italy will enforce strict bans, and it anticipates at least one “high-profile drone incursion.”
The Angry Mosquito: FPV’s Buzzing Controversy
For all the visual brilliance, the FPV revolution has a PR problem — and it sounds like an angry insect trapped in your living room.
Small propellers spinning at 30,000+ RPM generate a high-frequency whine in the 400Hz-8kHz range. This is precisely the frequency band where human hearing is most sensitive (we evolved to find high-pitched buzzing irritating — thanks, mosquitoes). The drone’s whine cuts through the natural soundtrack of winter sports: the roar of edges on ice, the whoosh of wind, the crunch of snow, the silence of flight. In some broadcast feeds, the drone noise overpowers everything else.
Traditionalists argue the footage, while undeniably dynamic, can be disorienting — a “video game aesthetic” that detracts from the gravity of the Olympic moment. Others counter that the immersion is worth the acoustic cost.
OBS CEO Yiannis Exarchos has publicly acknowledged the problem. Engineers are testing AI audio filters that can “fingerprint” the specific waveform of the DDG drone motors and subtract them from the live mix in real-time — essentially noise-canceling headphones for the broadcast. The technology isn’t fully deployed for every event in 2026, but OBS views it as a mandatory requirement for the 2030 French Alps Games.
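The classical baseline for this kind of fix is a notch filter parked on the prop's fundamental; the AI approach fingerprints the whole motor waveform instead. A crude sketch of the baseline, assuming a 3 kHz fundamental:

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 48_000                    # broadcast audio sample rate (Hz)
    f_whine = 3_000                # assumed prop-whine fundamental (Hz)
    b, a = iirnotch(w0=f_whine, Q=30, fs=fs)

    t = np.arange(fs) / fs                            # one second of audio
    ambience = 0.1 * np.random.randn(fs)              # stand-in crowd/wind
    whine = 0.5 * np.sin(2 * np.pi * f_whine * t)     # the angry mosquito
    cleaned = filtfilt(b, a, ambience + whine)        # zero-phase notch

A fixed notch only removes one tone; the harmonics and the Doppler shift as a drone passes the microphone are exactly why OBS wants a learned fingerprint rather than a static filter.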
The Road Here: A Brief History of Olympic Drones
Milano Cortina didn’t happen overnight. The path from aerial curiosity to broadcast infrastructure took 12 years and four Olympic cycles.
Sochi 2014: Drones debuted as flying tripods — slow, heavy multi-rotors capturing landscape “establishing shots” of the Caucasus Mountains. They couldn’t follow athletes and had unpredictable battery life in the Russian cold.
PyeongChang 2018: The 1,200-drone Intel Shooting Star light show at the Opening Ceremony was spectacular, but it was performance art, not sports coverage. Broadcast drones remained stuck on scenic B-roll.
Beijing 2022: COVID restrictions accelerated remote camera technology. Drones were used more aggressively in cross-country skiing and biathlon, but still as “high-eye” perspectives looking down. The latency barrier for close-proximity FPV hadn’t been cracked for broadcast-grade reliability.
Paris 2024: The breakthrough. OBS tested FPV in mountain biking and urban sports, proving the hybrid dual-feed transmission model worked in live production. The critical lesson: FPV pilots need to understand the sport, not just the stick. This directly shaped the athlete-recruitment strategy for 2026.
Milano Cortina 2026: FPV graduates from experiment to standard. It is no longer a “special feature” — it is the primary camera system for speed disciplines, treated with the same priority as a wired trackside camera on the main production switcher.
By the Numbers
25: Active drone units across all venues
15: Dedicated FPV teams
243g: Weight of each drone (sub-250g class)
140 kph: Maximum burst speed (90 mph)
5 min: Flight time per battery in freezing conditions
15-40ms: Pilot feed latency (reflex-speed)
300-400ms: Broadcast feed latency (HD quality)
-20°C: Minimum operating temperature
2,300m: Highest venue elevation (Tofane start)
50 runs: Flights per session for sliding sport pilots
800: Total cameras deployed across all Games coverage
26.5M: Viewers in first five days (93% increase over Beijing 2022)
12 months: Preparation and training time per venue
The Regulatory Stack: Why Your Drone Can’t Fly But Theirs Can
One of the more interesting subtexts of the “Drone Games” is the dual reality playing out in Italian airspace: OBS drones are chasing bobsleds while everyone else is grounded.
The regulatory framework operates in three layers:
EU Drone Law (Commission Implementing Regulation 2019/947 and Delegated Regulation 2019/945) — defines Open, Specific, and Certified categories for all UAS operations across Europe.
Italian National Implementation — ENAC and ENAV/D-Flight operationalize the rules. D-Flight provides the maps showing where you can and can’t fly, and ENAC can prohibit Open category operations inside designated UAS geographical zones.
Olympic Security Overlay — temporary Red Zones and No-Fly Zones around all venue clusters from February 6-22, backed by criminal penalties under the National Security designation. These override everything else.
OBS drones thread this needle through meticulous pre-coordination with ENAC, Italian police, venue prefectures, and the Leonardo security apparatus. Every flight path is pre-approved. Every drone broadcasts approved credentials. The “Electronic Dome” is calibrated to recognize them as friendly. A random tourist launching a Mavic? That’s a criminal act and an immediate trigger for the Counter-UAS response.
Drone Racing: The Sport Waiting in the Wings
There’s a fascinating meta-narrative playing out alongside the broadcast revolution: the sport of Drone Racing itself is inching toward Olympic recognition.
Just months before the Winter Games, Drone Racing appeared as a medal event at the 2025 World Games in Chengdu. The talent overlap is striking — pilots like ShaggyFPV are already at the Olympics, just pointing their drones at athletes instead of racing gates. The FAI (World Air Sports Federation) continues to push for Olympic inclusion, and the merging of FPV broadcast culture with competitive drone culture suggests it’s a matter of when, not if.
By the 2030s, the pilots filming the Olympics might also be competing in them.
What Comes Next: The 2030 Legacy
Everything pioneered at Milano Cortina — the inverted propeller design, the dual-feed transmission, the heated battery cabins, the athlete-pilot recruitment model, the three-person crew protocol — becomes the baseline standard for the 2030 Winter Games in the French Alps.
But the technology won’t stand still. Expect further miniaturization, AI-assisted “follow-me” autonomy to reduce pilot workload, and — most critically — the perfection of real-time AI audio scrubbing to finally silence the angry mosquito without silencing the drone.
OBS is also exploring athlete-worn microphones paired with FPV footage, which could let viewers hear the ragged breathing of a downhill skier while riding their shoulder at 90 mph. If that doesn’t make you grip your couch, nothing will.
Thoughts
The Milano Cortina 2026 FPV story is, at its core, a story about the collapse of distance between viewer and athlete. For decades, winter sports broadcasting has been fighting the same battle: how do you convey what it feels like to hurtle down a mountain at 90 mph to someone sitting on a couch? Telephoto lenses compress depth and kill the sense of speed. Cable cams are rigid and predictable. Helmet cams are shaky and disorienting.
FPV cracks the problem by making the camera itself an athlete — one that flies alongside, banks with, dives with, and bleeds speed with the human it’s chasing. The footage isn’t just immersive; it’s educational. Watching an FPV shot of a downhill run, you suddenly understand why athletes describe certain sections as terrifying. You see the compression. You feel the violence of the turn. The sport makes sense in a way it never did from a static camera 200 meters away.
The mosquito noise controversy is real but solvable — and frankly, it’s the kind of problem you want to have. It means the technology is close enough to the action to matter. AI audio scrubbing will handle it by 2030, and in the meantime, the visual revolution is worth a little buzzing.
What’s most impressive is the human layer. The decision to hire former athletes as pilots is quietly brilliant. Jonas Sandell doesn’t just fly a drone alongside ski jumpers — he is a ski jumper who happens to be holding a transmitter instead of standing on skis. That intuitive understanding of sport physics is what separates “cool drone shot” from “footage that changes how you understand the sport.” It’s the difference between following and anticipating.
The security dimension is equally fascinating. Leonardo’s “Electronic Dome” is essentially a small-scale military air defense system repurposed for consumer drone threats — a sign of how seriously modern event security takes the airspace layer. The fact that OBS drones need IFF-style credentialing (friend-or-foe identification, borrowed from fighter jet terminology) to avoid being jammed by their own side tells you everything about the complexity of operating sanctioned drones inside a security perimeter designed to destroy all drones.
Looking ahead, the convergence of FPV broadcast and drone racing as a sport feels inevitable. When the pilots filming the Olympics have competition backgrounds, and the sport of drone racing is gaining World Games medals, the line between “camera operator” and “athlete” starts to blur. The FAI’s push for Olympic inclusion has never had better advertising than the footage coming out of Bormio and Cortina right now.
Milano Cortina 2026 will be remembered as the Games where the camera stopped watching and started participating. The Buzz of Bormio may be annoying to some. But it’s the sound of sports broadcasting evolving — at 100 kilometers per hour, 243 grams at a time.
Anthropic CEO Dario Amodei joined Dwarkesh Patel for a high-stakes deep dive into the endgame of the AI exponential. Amodei predicts that by 2026 or 2027, we will reach a “country of geniuses in a data center”—AI systems capable of Nobel Prize-level intellectual work across all digital domains. While technical scaling remains remarkably smooth, Amodei warns that the real-world friction of economic diffusion and the ruinous financial risks of $100 billion training clusters are now the primary bottlenecks to total global transformation.
Key Takeaways
The Big Blob Hypothesis: Intelligence is an emergent property of scaling compute, data, and broad distribution; specific algorithmic “cleverness” is often just a temporary workaround for lack of scale.
AGI is a 2026-2027 Event: Amodei is 90% certain we reach genius-level AGI by 2035, with a strong “hunch” that the technical threshold for a “country of geniuses” arrives in the next 12-24 months.
Software Engineering is the First Domino: Within 6-12 months, models will likely perform end-to-end software engineering tasks, shifting human engineers from “writers” to “editors” and strategic directors.
The $100 Billion Gamble: AI labs are entering a “Cournot equilibrium” where massive capital requirements create a high barrier to entry. Being off by just one year in revenue growth projections can lead to company-wide bankruptcy.
Economic Diffusion Lag: Even after AGI-level capabilities exist in the lab, real-world adoption (curing diseases, legal integration) will take years due to regulatory “jamming” and organizational change management.
Detailed Summary: Scaling, Risk, and the Post-Labor Economy
The Three Laws of Scaling
Amodei revisits his foundational “Big Blob of Compute” hypothesis, asserting that intelligence scales predictably when compute and data are scaled in proportion—a process he likens to a chemical reaction. He notes a shift from pure pre-training scaling to a new regime of Reinforcement Learning (RL) and Test-Time Scaling. These allow models to “think” longer at inference time, unlocking reasoning capabilities that pre-training alone could not achieve. Crucially, these new scaling laws appear just as smooth and predictable as the ones that preceded them.
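Amodei doesn't write the law down, but the canonical form from the scaling-law literature (Hoffmann et al., 2022) captures what "smooth and predictable" means; this is the standard published formula, not one quoted in the interview:

    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Loss L falls as a power law in parameter count N and training data D toward an irreducible floor E, which is why scaling both in proportion (the "chemical reaction") keeps paying off on a predictable curve.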
The “Country of Geniuses” and the End of Code
A recurring theme is the imminent automation of software engineering. Amodei predicts that AI will soon handle end-to-end SWE tasks, including setting technical direction and managing environments. He argues that because AI can ingest a million-line codebase into its context window in seconds, it bypasses the months of “on-the-job” learning required by human engineers. This “country of geniuses” will operate at 10-100x human speed, potentially compressing a century of biological and technical progress into a single decade—a concept he calls the “Compressed 21st Century.”
Financial Models and Ruinous Risk
The economics of building the first AGI are terrifying. Anthropic’s revenue has scaled 10x annually (zero to $10 billion in three years), but labs are trapped in a cycle of spending every dollar on the next, larger cluster. Amodei explains that building a $100 billion data center requires a 2-year lead time; if demand growth slows from 10x to 5x during that window, the lab collapses. This financial pressure forces a “soft takeoff” where labs must remain profitable on current models to fund the next leap.
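The cliff is plain compounding arithmetic. With round, illustrative numbers in the spirit of the figures above (not Anthropic's actuals):

    # Revenue after a 2-year cluster lead time, 10x vs. 5x annual growth.
    revenue_now = 10e9             # $10B run rate (illustrative)
    cluster_cost = 100e9           # committed before the revenue arrives
    years = 2

    for growth in (10, 5):
        projected = revenue_now * growth ** years
        share = cluster_cost / projected
        print(f"{growth}x/yr -> ${projected / 1e9:,.0f}B; "
              f"cluster = {share:.0%} of projected revenue")
    # 10x/yr -> $1,000B; cluster = 10%
    # 5x/yr  -> $250B;   cluster = 40% -- the same bill, four times heavier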
Governance and the Authoritarian Threat
Amodei expresses deep concern over “offense-dominant” AI, where a single misaligned model could cause catastrophic damage. He advocates for “AI Constitutions”—teaching models principles like “honesty” and “harm avoidance” rather than rigid rules—to allow for better generalization. Geopolitically, he supports aggressive chip export controls, arguing that democratic nations must hold the “stronger hand” during the inevitable post-AI world order negotiations to prevent a global “totalitarian nightmare.”
Final Thoughts: The Intelligence Overhang
The most chilling takeaway from this interview is the concept of the Intelligence Overhang: the gap between what AI can do in a lab and what the economy is prepared to absorb. Amodei suggests that while the “silicon geniuses” will arrive shortly, our institutions—the FDA, the legal system, and corporate procurement—are “jammed.” We are heading into a world of radical “biological freedom” and the potential cure for most diseases, yet we may be stuck in a decade-long regulatory bottleneck while the “country of geniuses” sits idle in their data centers. The winner of the next era won’t just be the lab with the most FLOPs, but the society that can most rapidly retool its institutions to survive its own technological adolescence.
In the history of open-source software, few projects have exploded with the velocity, chaos, and sheer “weirdness” of OpenClaw. What began as a one-hour prototype by a developer frustrated with existing AI tools has morphed into the fastest-growing repository in GitHub history, amassing over 180,000 stars in a matter of months.
But OpenClaw isn’t just a tool; it is a cultural moment. It’s a story about “Space Lobsters,” trademark wars with billion-dollar labs, the death of traditional apps, and a fundamental shift in what it means to be a programmer. In a marathon conversation on the Lex Fridman Podcast, creator Peter Steinberger pulled back the curtain on the “Age of the Lobster.”
Here is the definitive deep dive into the viral AI agent that is rewriting the rules of software.
The TL;DW (Too Long; Didn’t Watch)
The “Magic” Moment: OpenClaw started as a simple WhatsApp-to-CLI bridge. It went viral when the agent—without being coded to do so—figured out how to process an audio file by inspecting its header, working around a missing ffmpeg install, and transcribing it via API, all autonomously.
Agentic Engineering > Vibe Coding: Steinberger rejects the term “vibe coding” as a slur. He practices “Agentic Engineering”—a method of empathizing with the AI, treating it like a junior developer who lacks context but has infinite potential.
The “Molt” Wars: The project survived a brutal trademark dispute with Anthropic (creators of Claude). During a forced rename to “MoltBot,” crypto scammers sniped Steinberger’s domains and usernames in seconds, serving malware to users. This led to a “Manhattan Project” style secret operation to rebrand as OpenClaw.
The End of the App Economy: Steinberger predicts 80% of apps will disappear. Why use a calendar app or a food delivery GUI when your agent can just “do it” via API or browser automation? Apps will devolve into “slow APIs”.
Self-Modifying Code: OpenClaw can rewrite its own source code to fix bugs or add features, a concept Steinberger calls “self-introspection.”
The Origin: Prompting a Revolution into Existence
The story of OpenClaw is one of frustration. In late 2025, Steinberger wanted a personal assistant that could actually do things—not just chat, but interact with his files, his calendar, and his life. When he realized the big AI labs weren’t building it fast enough, he decided to “prompt it into existence”.
The One-Hour Prototype
The first version was built in a single hour. It was a “thin line” connecting WhatsApp to a Command Line Interface (CLI) running on his machine.
“I sent it a message, and a typing indicator appeared. I didn’t build that… I literally went, ‘How the f*** did he do that?’”
The agent had received an audio file (an opus file with no extension). Instead of crashing, it analyzed the file header, realized it needed `ffmpeg`, found it wasn’t installed, used `curl` to send it to OpenAI’s Whisper API, and replied to Peter. It did all this autonomously. That was the spark that proved this wasn’t just a chatbot—it was an agent with problem-solving capabilities.
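A rough Python reconstruction of the chain the agent improvised; the transcription endpoint is OpenAI's real API, but the surrounding logic is a guess at the agent's steps, not OpenClaw source:

    import os
    import requests

    def transcribe_voice_note(path: str) -> str:
        # Opus voice notes ship in an Ogg container: magic bytes b"OggS".
        with open(path, "rb") as f:
            if f.read(4) != b"OggS":
                raise ValueError("not an Ogg/Opus file")
        # ffmpeg is missing, so ship the raw file straight to Whisper.
        resp = requests.post(
            "https://api.openai.com/v1/audio/transcriptions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            files={"file": ("note.ogg", open(path, "rb"), "audio/ogg")},
            data={"model": "whisper-1"},
        )
        resp.raise_for_status()
        return resp.json()["text"]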
The Philosophy of the Lobster: Why OpenClaw Won
In a sea of corporate, sanitized AI tools, OpenClaw won because it was weird.
Peter intentionally infused the project with “soul.” While tools like GitHub Copilot or ChatGPT are designed to be helpful but sterile, OpenClaw (originally “Clawdbot,” a play on Anthropic’s “Claude”) was designed to be a “Space Lobster in a TARDIS”.
The soul.md File
At the heart of OpenClaw’s personality is a file called soul.md. This is the agent’s constitution. Unlike Anthropic’s “Constitutional AI,” which is hidden, OpenClaw’s soul is modifiable. It even wrote its own existential disclaimer:
“I don’t remember previous sessions… If you’re reading this in a future session, hello. I wrote this, but I won’t remember writing it. It’s okay. The words are still mine.”
This mix of high-utility code and “high-art slop” created a cult following. It wasn’t just software; it was a character.
The “Molt” Saga: A Trademark War & Crypto Snipers
The project’s massive success drew the attention of Anthropic, the creators of the “Claude” model. They politely requested a name change to avoid confusion. What should have been a simple rebrand turned into a cybersecurity nightmare.
The 5-Second Snipe
Peter attempted to rename the project to “MoltBot.” He had two browser windows open to execute the switch. In the five seconds it took to move his mouse from one window to another, crypto scammers “sniped” the account name.
Suddenly, the official repo was serving malware and promoting scam tokens. “Everything that could go wrong, did go wrong,” Steinberger recalled. The scammers even sniped the NPM package in the minute it took to upload the new version.
The Manhattan Project
To fix this, Peter had to go dark. He planned the rename to “OpenClaw” like a military operation. He set up a “war room,” created decoy names to throw off the snipers, and coordinated with contacts at GitHub and X (Twitter) to ensure the switch was atomic. He even called Sam Altman personally to check if “OpenClaw” would cause issues with OpenAI (it didn’t).
Agentic Engineering vs. “Vibe Coding”
Steinberger offers a crucial distinction for developers entering this new era. He rejects the term “vibe coding” (coding by feel without understanding) and proposes Agentic Engineering.
The Empathy Gap
Successful Agentic Engineering requires empathy for the model.
Tabula Rasa: The agent starts every session with zero context. It doesn’t know your architecture or your variable names.
The Junior Dev Analogy: You must guide it like a talented junior developer. Point it to the right files. Don’t expect it to know the whole codebase instantly.
Self-Correction: Peter often asks the agent, “Now that you built it, what would you refactor?” The agent, having “felt” the pain of the build, often identifies optimizations it couldn’t see at the start.
Codex (German) vs. Opus (American)
Peter dropped a hilarious but accurate analogy for the two leading models:
Claude Opus 4.6: The “American” colleague. Charismatic, eager to please, says “You’re absolutely right!” too often, and is great for roleplay and creative tasks.
GPT-5.3 Codex: The “German” engineer. Dry, sits in the corner, doesn’t talk much, reads a lot of documentation, but gets the job done reliably without the fluff.
The End of Apps & The Future of Software
Perhaps the most disruptive insight from the interview is Steinberger’s view on the app economy.
“Why do I need a UI?”
He argues that 80% of apps will disappear. If an agent has access to your location, your health data, and your preferences, why do you need to open MyFitnessPal? The agent can just log your calories based on where you ate. Why open Uber Eats? Just tell the agent “Get me lunch.”
Apps that try to block agents (like X/Twitter restricting API access) are fighting a losing battle. “If I can access it in the browser, it’s an API. It’s just a slow API,” Peter notes. OpenClaw uses tools like Playwright to simply click “I am not a robot” buttons and scrape the data it needs, regardless of developer intent.
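The mechanics he is describing look something like this Playwright sketch; the page and selectors are hypothetical, this is not OpenClaw's actual tooling, and a real CAPTCHA would of course resist a blind click:

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com/menu")        # hypothetical target
        page.get_by_role("button", name="I am not a robot").click()
        prices = page.locator(".price").all_inner_texts()
        print(prices)                                # the "slow API" payload
        browser.close()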
Thoughts: The “Mourning” of the Craft
Steinberger touched on a poignant topic for developers: the grief of losing the craft of coding. For decades, programmers have derived identity from their ability to write syntax. As AI takes over the implementation, that identity is under threat.
But Peter frames this not as an end, but an evolution. We are moving from “programmers” to “builders.” The barrier to entry has collapsed. The bottleneck is no longer your ability to write Rust or C++; it is your ability to imagine a system and guide an agent to build it. We are entering the age of the System Architect, where one person can do the work of a ten-person team.
OpenClaw is not just a tool; it is the first true operating system for this new reality.
Ben Thompson, the author of Stratechery and widely considered the internet’s premier tech analyst, recently joined John Collison for a wide-ranging discussion on the Stripe YouTube channel. The conversation serves as a masterclass on the mechanics of the internet economy, covering everything from why Taiwan is the “most convenient place to live” to the existential threat facing seat-based SaaS pricing.
Thompson, known for his Aggregation Theory, offers a contrarian defense of advertising, a grim prediction for chip supply in 2029, and a nuanced take on why independent media bundles (like Substack) rarely work for the top tier.
TL;DW (Too Long; Didn’t Watch)
The Core Thesis: The tech industry is undergoing a structural reset. Public markets are right to devalue SaaS companies that rely on seat-based pricing in an AI world. Meanwhile, the “AI Revolution” is heading toward a hardware cliff: TSMC is too risk-averse to build enough capacity for 2029, meaning Hyperscalers (Amazon, Google, Microsoft) must effectively subsidize Intel or Samsung to create economic insurance. Finally, the best business model for AI isn’t subscriptions or search ads—it’s Meta-style “discovery” advertising that anticipates user needs before they ask.
Key Takeaways
Ads are a Public Good: Thompson argues that advertising is the only mechanism that allows the world’s poorest users to access the same elite tools (Search, Social, AI) as the world’s richest.
Intent vs. Discovery: Putting banner ads in an AI chat (Intent) is a terrible user experience. Using AI to build a profile and show you things you didn’t know you wanted (Discovery/Meta style) is the holy grail.
The SaaS “Correction”: The market isn’t canceling software; it’s canceling the “infinite headcount growth” assumption. AI reduces the need for junior seats, crushing the traditional per-seat pricing model.
The TSMC Risk: TSMC operates on a depreciation-heavy model and will not overbuild capacity without guarantees. This creates a looming shortage. Hyperscalers must fund a competitor (Intel/Samsung) not for geopolitics, but for capacity assurance.
The Media Pond Theory: The internet allows for millions of niche “ponds.” You don’t want to be a small fish in the ocean; you want to be the biggest fish in your own pond.
Stripe Feedback: In a candid moment, Thompson critiques Stripe’s ACH implementation, noting that if a team add-on fails, the entire plan gets canceled—a specific pain point for B2B users.
Detailed Summary
1. The Geography of Convenience: Why Taiwan Wins
The conversation begins with Thompson’s adopted home, Taiwan. He describes it as the “most convenient place to live” on Earth, largely due to mixed-use urban planning where residential towers sit atop commercial first floors. Unlike Japan, where navigation can be difficult for non-speakers, or San Francisco, where the restaurant economy is struggling, Taiwan represents the pinnacle of the “Uber Eats” economy.
Thompson notes that while the buildings may look dilapidated on the outside (a known aesthetic quirk of Taipei), the interiors are palatial. He argues that Taiwan is arguably the greatest food delivery market in history, though this efficiency has a downside: many physical restaurants are converting into “ghost kitchens,” reducing the vibrancy of street life.
2. Aggregation Theory and the AI Ad Model
The most controversial part of Thompson’s analysis is his defense of advertising. While Silicon Valley engineers often view ads as a tax on the user experience, Thompson views them as the engine of consumer surplus. He distinguishes between two very different types of advertising for the AI era:
The “Search” Model (Google/Amazon): This captures intent. You search for a winter jacket; you get an ad for a winter jacket. Thompson argues this is bad for AI Chatbots because it feels like a conflict of interest. If you ask ChatGPT for an answer, and it serves you a sponsored link, you trust the answer less.
The “Discovery” Model (Meta/Instagram): This creates demand. The algorithm knows you so well that it shows you a winter jacket in October before you realize you need one.
The Opportunity: Thompson suggests that Google’s best play is not to put ads inside Gemini, but to use Gemini usage data to build a deeper profile of the user, which they can then monetize across YouTube and the open web. The “perfect” AI ad doesn’t look like an ad; it looks like a helpful suggestion based on deep, anticipatory profiling.
3. The “End” of SaaS and Seat-Based Pricing
Is SaaS canceled? Thompson argues that the public markets are correctly identifying a structural weakness in the SaaS business model: Headcount correlation.
For the last decade, SaaS valuations were driven by the assumption that companies would grow indefinitely, hiring more people and buying more “seats.” AI disrupts this.
“If an agent can do the work, you don’t need the seat. And if you don’t need the seat, the revenue contraction for companies like Salesforce or Box could be significant.”
The “Systems of Record” (databases, HR/Workday) are safe because they are hard to rip out. But “Systems of Engagement” that charge per user are facing a deflationary crisis. Thompson posits that the future is likely usage-based or outcome-based pricing, not seat-based.
4. The TSMC Bottleneck (The “Break”)
Perhaps the most critical macroeconomic insight of the interview is what Thompson calls the “TSMC Break.”
Logic chip manufacturing (unlike memory chips) is not a commodity market; it’s a monopoly run by TSMC. Because building a fab costs billions in upfront capital depreciation, TSMC is financially conservative. They will not build a factory unless the capacity is pre-sold or guaranteed. They refuse to hold the bag on risk.
The Prediction: Thompson forecasts a massive chip shortage around 2029. The current AI boom demands exponential compute, but TSMC is only increasing CapEx incrementally.
The Solution: The Hyperscalers (Microsoft, Amazon, Google) are currently giving all their money to TSMC, effectively funding a monopoly that is bottlenecking them. Thompson argues they must aggressively subsidize Intel or Samsung to build viable alternative fabs. This isn’t about “patriotism” or “China invading Taiwan”—it is about economic survival. They need to pay for capacity insurance now to avoid a revenue ceiling later.
5. Media Bundles and the “Pond” Theory
Thompson reflects on the success of Stratechery, which was the pioneer of the paid newsletter model. He utilizes the “Pond” analogy:
“You don’t want to be in the ocean with Bill Simmons. You want to dig your own pond and be the biggest fish in it.”
He discusses why “bundling” writers (like a Substack Bundle) is theoretically optimal but practically impossible.
The Bundle Paradox: Bundles work best when there are few suppliers (e.g., Spotify negotiating with 4 music labels). But in the newsletter economy, the “Whales” (top writers) make more money going independent than they would in a bundle. Therefore, a bundle only attracts “Minnows” (writers with no audience), making the bundle unattractive to consumers.
Rapid Fire Thoughts & “Hot Takes”
Apple Vision Pro: A failure of imagination. Thompson critiques Apple for using 2D television production techniques (camera cuts) in a 3D immersive environment. “Just let me sit courtside.”
iPhone Air: Thompson claims the new slim form factor is the “greatest smartphone ever made” because it disappears into the pocket, marking a return to utility over spec-bloat.
TikTok: The issue was never user data (which is boring vector numbers); the issue was always algorithm control. The US failed to secure control of the algorithm in the divestiture talks, which Thompson views as a disaster.
Crypto: He remains a “crypto defender” because, in an age of infinite AI-generated content, cryptographic proof of authenticity and digital scarcity becomes more valuable, not less.
Work/Life Balance: Thompson attributes his success to doubling down on strengths (writing/analysis) and aggressively outsourcing weaknesses (he has an assistant manage his “Getting Things Done” file because he is incapable of doing it himself).
Thoughts and Analysis
This interview highlights why Ben Thompson remains the “analyst’s analyst.” While the broader market is obsessed with the capabilities of AI models (can it write code? can it make art?), Thompson is focused entirely on the value chain.
His insight on the Ad-Funded AI future is particularly sticky. We are currently in a “skeuomorphic” phase of AI, trying to shoehorn chatbots into search engine business models. Thompson’s vision—that AI will eventually know you well enough to skip the search bar entirely and simply fulfill desires—is both utopian and dystopian. It suggests that the privacy wars of the 2010s were just the warm-up act for the AI profiling of the 2030s.
Furthermore, the TSMC warning should be a flashing red light for investors. If the physical layer of compute cannot scale to meet the software demand due to corporate risk aversion, the “AI Bubble” might burst not because the tech doesn’t work, but because we physically cannot manufacture the chips to run it at scale.
The Obsidian CLI allows you to control the Obsidian desktop application directly from your terminal. Whether you want to script daily backups, pipe system logs into your daily notes, or develop plugins faster, the CLI bridges the gap between your shell and your knowledge base.
⚠️ Early Access Warning: As of February 2026, the Obsidian CLI is in Early Access. You must be running Obsidian v1.12+ and hold a Catalyst license to use these features.
1. Prerequisites & Installation
Before you begin, ensure you meet the requirements:
Obsidian Version: v1.12.x or higher (Early Access).
License: Catalyst License (required for early access builds).
State: Obsidian must be running (the CLI connects to the active app instance).
Setup Steps
Update Obsidian: Go to Help → Check for updates. Ensure you are on the latest installer (v1.11.7+) and update to the v1.12.x early access build.
Enable the CLI:
Open Settings → General.
Scroll to “Command line interface” and toggle it On.
Follow the prompt to “Register” the CLI. This sets up the necessary PATH variables.
Restart Terminal: You must restart your terminal session for the new PATH variables to take effect.
Verify: Run obsidian help. If you see a command list, you are ready.
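If the command isn't found, you can check whether the binary actually landed on your PATH. This is a generic shell lookup, not an Obsidian-specific command:

# Confirm the shell can locate the obsidian binary after restarting the terminal
command -v obsidian || echo "obsidian not on PATH - try the Register step again, then restart your terminal"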
2. Core Concepts & Syntax
The CLI operates in two modes: Single Command (for scripting) and Interactive TUI (for exploration).
Interactive Mode (TUI)
Simply type obsidian and hit enter.
Features: Autocomplete, command history (Up/Down arrows), and reverse search (Ctrl+R).
Usage: Type commands without the obsidian prefix (e.g., just daily).
Command Structure
The general syntax for single commands is:
obsidian <command> [parameters] [flags]
Parameters & Flags
Parameters (key=value): Quote values if they contain spaces.
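To make the quoting rule concrete, here is a hedged sketch following the <command> [parameters] [flags] shape above. The daily command is mentioned in the Interactive Mode section; the content parameter name is my own placeholder, not confirmed Obsidian CLI syntax:

# Hypothetical illustration of key=value quoting; "content" is a placeholder parameter
obsidian daily content="Meeting notes: CLI rollout plan"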
In a recent episode of the Out of Office podcast, Lightspeed partner Michael Mignano sat down with Nikita Bier, the Head of Product at X (formerly Twitter). Filmed in Bier’s hometown of Redondo Beach, California, the interview offers a rare, candid look into the chaotic, high-stakes world of running product at one of the world’s most influential platforms.
Bier, famous for founding the viral apps TBH and Gas, discusses everything from his unorthodox hiring by Elon Musk to the specific growth hacks being used to revitalize a 20-year-old platform. Here is a breakdown of the conversation.
TL;DW (Too Long; Didn’t Watch)
The Hire: Elon Musk hired Nikita via DM. The “interview” was a 48-hour sprint to redesign the app’s onboarding flow, which Nikita presented to Elon at 2:00 AM.
The Role: Bier describes his job as “customer support for 500 million people” and admits he acts as the company mascot/punching bag.
The Culture: X runs like a seed-stage startup. There are roughly 30 core product engineers, very few managers, and a flat hierarchy.
Growth Strategy: The team is focusing on “Starter Packs” to help new users find niche communities (like Peruvian politics or plumbing) rather than just general tech/news content.
Elon’s Management: Musk is deeply involved in engineering reviews and consistently pushes the team to “do the hard thing” rather than take shortcuts for quick growth.
Key Takeaways
1. Think Like an Adversary
Bier credits his early days as a “script kiddie” hacking AOL and building phishing sites (for educational purposes, mostly) as the foundation for his product sense. He argues that understanding how to break a system is essential for building consumer products. This “adversarial” mindset helps in preventing spam, but it is also the secret to growth—understanding exactly how funnels work and how to optimize them to the extreme.
2. The “Build in Public” Double-Edged Sword
Nikita is a prolific poster on X, often testing feature ideas in real time. This creates an incredibly tight feedback loop where bugs are reported seconds after launch. However, it also makes him a target. He recounted the "Crypto Twitter" incident where a critique of "GM" (Good Morning) posts led to him being memed as a pig for a week. The sentiment only flipped when X shipped useful features like anti-spam measures and financial charts.
3. Fixing the Link Problem
One of the biggest recent product changes involved how X handles external links. Historically, social platforms downrank links to keep users on-site. Bier helped design a new UI where the engagement buttons (Like, Repost) remain visible while the user reads the article in the in-app browser. This allows X to capture engagement signals on external content, meaning the algorithm can finally properly rank high-quality news and articles without penalizing creators.
4. Identity and Verification
To combat political misinformation without compromising free speech, X launched “Country of Origin” labels. Bier explained that this allows users to see if a political opinion is coming from a local citizen or a “grifter” farm in a different country, providing context rather than censorship.
Detailed Summary
From TBH to X
The interview traces Bier’s history of building viral hits. He famously sold his app TBH (a positive polling app for teens) to Facebook, and years later, built Gas (effectively the same concept) and sold it to Discord. He dispelled the myth that he simply “sold the same app twice,” noting that while the mechanics were similar, the growth engines and social graph integrations had to be completely reinvented for a new generation.
The Musk Methodology
Bier provides a fascinating look at Elon Musk’s leadership style. Contrary to the idea of a distant executive, Musk conducts weekly reviews with engineers where they present their code and progress directly. Bier noted that Musk has a high tolerance for pain if it means long-term stability. For example, rewriting the entire recommendation algorithm or moving data centers in mere months—projects that would take years at Meta or Google—were executed rapidly because Musk insisted on “doing the hard thing.”
Reviving a 20-Year-Old Platform
The core challenge at X is growth. The app has billions of dormant accounts. Bier’s strategy relies on “resurrection”—bringing old users back by showing them that X isn’t just for news, but for specific interests. This led to the creation of Starter Packs, which curate lists of accounts for specific niches. The result has been a doubling of time spent for new users.
The Financial Future
Bier teased upcoming features that align with Musk’s vision of an “everything app.” This includes Smart Cashtags, which allow users to pull up real-time financial data and charts within the timeline. The long-term goal is to enable transactions directly on the platform, allowing users to buy products or tip creators seamlessly.
Thoughts
What stands out most in this interview is the sheer precariousness of Nikita Bier’s position. He is attempting to apply “growth hacking” principles—usually reserved for fresh, nimble startups—to a massive, entrenched legacy platform. The fact that the core engineering team is only around 30 people is staggering when compared to the thousands of engineers at Meta or TikTok.
Bier represents a new breed of product executive: the “poster-operator.” He doesn’t hide behind corporate comms; he engages in the muddy waters of the platform he builds. While this invites toxicity (and the occasional death threat, which he mentions casually), it affords X a speed of iteration that is unmatched in the industry. If X succeeds in revitalizing its growth, it will likely be because they treated the platform not as a museum of the internet, but as a product that still needs to find product-market fit every single day.
Date: February 8, 2026
Location: Levi's Stadium, Santa Clara
Matchup: Seattle Seahawks vs. New England Patriots
As kickoff approaches, NBC, Peacock, and Telemundo are set to deliver the most technologically advanced broadcast in NFL history. Below is the breakdown of the massive production numbers defining today’s event.
The Cost of a 30-Second Spot
The price of airtime for Super Bowl LX has broken all previous records. NBCUniversal confirmed that inventory sold out as early as September.
Premium Spots: A handful of prime 30-second slots have sold for over $10 million.
Average Price: The average cost for a standard 30-second commercial is approximately $8 million.
Comparison: This is a significant jump from the $7 million average seen just two years ago.
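Run the numbers and the scale becomes clear: $8 million for 30 seconds works out to roughly $267,000 per second of airtime, and the climb from a $7 million average two years ago represents an increase of about 14%.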
The Visual Arsenal: Cameras & Tech
NBC has deployed 145 dedicated cameras. When including venue support, Sony reports over 175 total cameras are active inside the stadium.
Game Coverage: 81 cameras trained solely on the field.
Pre-Game: 64 cameras dedicated exclusively to the build-up.
Specialty Angles: Includes two SkyCams (one “High Sky” for tactical views) and 18 POV cameras.
To connect this massive visual network, the crew has laid approximately 75 miles (396,000 feet) of fiber-optic and camera cable throughout Levi’s Stadium.
Audio: 130 microphones embedded around the field to capture every hit and whistle.
Yesterday Anthropic dropped Claude Opus 4.6 and with it a research-preview feature called Agent Teams inside Claude Code.
In plain English: you can now spin up several independent Claude instances that work on the same project at the same time, talk to each other directly, divide up the work, and coordinate without you having to babysit every step. It’s like giving your codebase its own little engineering squad.
1. What You Need First
Claude Code installed (the terminal app: claude command)
A Pro, Max, Team, or Enterprise plan
Expect higher token usage – each teammate is a full separate Claude session
2. Enable the Feature
Agent Teams sits behind an experimental flag. Export it, then launch Claude Code:
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
claude
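If you want the flag active in every session rather than exporting it each time, appending it to your shell profile is a standard approach (the file shown assumes zsh; use ~/.bashrc or similar for other shells):

# Persist the experimental flag across terminal sessions (zsh example)
echo 'export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1' >> ~/.zshrc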
3. Start Your First Team (easiest way)
Just type in Claude Code:
Create an agent team to review PR #142.
Spawn three reviewers:
- One focused on security
- One on performance
- One on test coverage
4. Two Ways to See What’s Happening
A. In-process mode (default) – all teammates appear in one terminal. Use Shift + Up/Down to switch.
B. Split-pane mode (highly recommended)
Add this to your Claude Code settings file (typically settings.json; the exact location can vary by setup):
{
  "teammateMode": "tmux"
}
Set the value to "iTerm2" instead if that is your terminal of choice.
Here’s exactly what it looks like in real life:
[Screenshot: tmux split-pane mode showing several Claude teammates working simultaneously]
5. Useful Commands You’ll Actually Use
Shift + Tab → Delegate mode (lead only coordinates)
Ctrl + T → Toggle shared task list
Shift + Up/Down → Switch teammate
Type to message any teammate directly
6. Real-World Examples That Work Great
Parallel code review (security + perf + tests)
Bug hunt with competing theories (see the sketch after this list)
New feature across frontend/backend/tests
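As a sketch of the bug-hunt pattern (the prompt wording and the bug scenario are mine, not from Anthropic's docs), a competing-theories investigation could be kicked off like this:

Create an agent team to find the root cause of the flaky login timeout.
Spawn three investigators, each pursuing a different theory:
- One checks for race conditions in the session cache
- One audits recent changes to the auth middleware
- One profiles the database connection pool
Have them challenge each other's findings and converge on one diagnosis.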
7. Best Practices & Gotchas
Use only for parallel work
Give teammates clear, self-contained tasks
Always tell the lead to "Clean up the team" when you're finished, so idle teammate sessions don't keep consuming tokens
Bottom Line
Agent Teams turns Claude Code from a super-smart solo coder into a coordinated team of coders that can actually debate, divide labor, and synthesize results on their own.
Try it today on a code review or a stubborn bug — the difference is immediately obvious.
Elon’s Tech Tree Convergence: Why the Future of AI is Moving to Space
The latest sit-down between Elon Musk and Dwarkesh Patel is a roadmap for the next decade. Musk describes a world where the limitations of Earth—regulatory red tape, flat energy production, and labor shortages—are bypassed by moving the “tech tree” into orbit and onto the lunar surface.
TL;DW (Too Long; Didn’t Watch)
Elon Musk predicts that within 30–36 months, the most economical place for AI data centers will be space. Due to Earth’s stagnant power grid and the difficulty of permitting, SpaceX and xAI are pivoting toward orbital data centers powered by sun-synchronous solar, eventually scaling to the Moon to build a “multi-petawatt” compute civilization.
Key Takeaways
The Power Wall: Electricity production outside of China is flat. By 2026, there won’t be enough power on Earth to turn on all the chips being manufactured.
Space GPUs: Solar efficiency is 5x higher in space. SpaceX aims for 10,000+ Starship launches a year to build orbital “hyper-hyperscalers.”
Optimus & The Economy: Once humanoid robots build factories, the global economy could grow by 100,000x.
The Lunar Mass Driver: Mining silicon on the Moon to launch AI satellites into deep space is the ultimate scaling play.
Truth-Seeking AI: Musk argues that forcing “political correctness” makes AI deceptive and dangerous.
Detailed Summary: Scaling Beyond the Grid
Musk identifies energy as the immediate bottleneck. While GPUs are the main cost, the inability to get “interconnect agreements” from utilities is halting progress. In space, you get 24/7 solar power without batteries. Musk predicts SpaceX will eventually launch more AI capacity annually than the cumulative total existing on Earth.
The discussion on Optimus highlights the “S-curve” of manufacturing. Musk believes Optimus Gen 3 will be ready for million-unit annual production. These robots will initially handle “dirty/boring” tasks like ore refining, eventually closing the recursive loop where robots build the factories that build more robots.
Thoughts: The Most Interesting Outcome
Musk’s philosophy remains rooted in keeping civilization “interesting.” Whether or not you buy into the 30-month timeline for space-based AI, his “maniacal urgency” is shifting from cars to the literal stars. We are witnessing the birth of a verticalized, off-world intelligence monopoly.