PJFP.com

Pursuit of Joy, Fulfillment, and Purpose


  • Andrej Karpathy on AutoResearch, AI Agents, and Why He Stopped Writing Code: Full Breakdown of His 2026 No Priors Interview

    TL;DW

    Andrej Karpathy sat down with Sarah Guo on the No Priors podcast (March 2026) and delivered one of the most information-dense conversations about the current state of AI agents, autonomous research, and the future of software engineering. The core thesis: since December 2025, Karpathy has essentially stopped writing code by hand. He now “expresses his will” to AI agents for 16 hours a day, and he believes we are entering a “loopy era” where autonomous systems can run experiments, train models, and optimize hyperparameters without a human in the loop. His project AutoResearch proved this works by finding improvements to a model he had already hand-tuned using two decades of experience. The conversation also covers the death of bespoke apps, the future of education, open vs. closed source models, robotics, job market impacts, and why Karpathy chose to stay independent from frontier labs.

    Key Takeaways

    1. The December 2025 Shift Was Real and Dramatic

    Karpathy describes a hard flip that happened in December 2025 where he went from writing 80% of his own code to writing essentially none of it. He says the average software engineer’s default workflow has been “completely different” since that month. He calls this state “AI psychosis” and says he feels anxious whenever he is not at the forefront of what is possible with these tools.

    2. AutoResearch: Agents That Do AI Research Autonomously

    AutoResearch is Karpathy’s project where an AI agent is given an objective metric (like validation loss), a codebase, and boundaries for what it can change. It then loops autonomously, running experiments, tweaking hyperparameters, modifying architectures, and committing improvements without any human in the loop. When Karpathy ran it overnight on a model he had already carefully tuned by hand over years, it found optimizations he had missed, including forgotten weight decay on value embeddings and insufficiently tuned Adam betas.

    3. The Name of the Game Is Removing Yourself as the Bottleneck

    Karpathy frames the current era as a shift from optimizing your own productivity to maximizing your “token throughput.” The goal is to arrange tasks so that agents can run autonomously for extended periods. You are no longer the worker. You are the orchestrator, and every minute you spend in the loop is a minute the system is held back.

    4. Mastery Now Means Managing Multiple Agents in Parallel

    The vision of mastery is not writing better code. It is managing teams of agents simultaneously. Karpathy references Peter Steinberger’s workflow of having 10+ Codex agents running in parallel across different repos, each taking about 20 minutes per task. You move in “macro actions” over your codebase, delegating entire features rather than writing individual functions.

    5. Personality and Soul Matter in Coding Agents

    Karpathy praises Claude’s personality, saying it feels like a teammate who gets excited about what you are building. He contrasts this with Codex, which he calls “very dry” and disengaged. He specifically highlights that Claude’s praise feels earned because it does not react equally to half-baked ideas and genuinely good ones. He credits Peter Steinberger (creator of OpenClaw) with innovating on the “soul” of an agent through careful prompt design, memory systems, and a unified WhatsApp interface.

    6. Apps Are Dead. APIs and Agents Are the Future.

    Karpathy built “Dobby the Elf Claw,” a home automation agent that controls his Sonos, lights, HVAC, shades, pool, spa, and security cameras through natural language over WhatsApp. He did this by having agents scan his local network, reverse-engineer device APIs, and build a unified dashboard. His conclusion: most consumer apps should not exist. Everything should be API endpoints that agents can call on behalf of users. The “customer” of software is increasingly the agent, not the human.

    7. AutoResearch Could Become a Distributed Computing Project

    Karpathy envisions an “AutoResearch at Home” model inspired by SETI@home and Folding@home. Because it is expensive to find code optimizations but cheap to verify them (just run the training and check the metric), untrusted compute nodes on the internet could contribute experimental results. He draws an analogy to blockchain: instead of blocks you have commits, instead of proof of work you have expensive experimentation, and instead of monetary reward you have leaderboard placement. He speculates that a global swarm of agents could potentially outperform frontier labs.

    8. Education Is Being Redirected Through Agents

    Karpathy describes his MicroGPT project, a 200-line distillation of LLM training to its bare essence. He says he started to create a video walkthrough but realized that is no longer the right format. Instead, he now “explains things to agents,” and the agents can then explain them to individual humans in their own language, at their own pace, with infinite patience. He envisions education shifting to “skills” (structured curricula for agents) rather than lectures or guides for humans directly.

    9. The Jaggedness Problem Is Still Real

    Karpathy describes current AI agents as simultaneously feeling like a “brilliant PhD student who has been a systems programmer their entire life” and a 10-year-old. He calls this “jaggedness,” and it stems from reinforcement learning only optimizing for verifiable domains. Models can move mountains on agentic coding tasks but still tell the same bad joke they told four years ago (“Why don’t scientists trust atoms? Because they make everything up.”). Things outside the RL reward loop remain stuck.

    10. Open Source Is Healthy and Necessary, Even If Behind

    Karpathy estimates open source models are now roughly 6 to 8 months behind closed frontier models, down from 18 months and narrowing. He draws a parallel to Linux: the industry has a structural need for a common, open platform. He is “by default very suspicious” of centralization and wants more labs, more voices in the room, and an “ensemble” approach to AI governance. He thinks it is healthy that open source exists slightly behind the frontier, eating through basic use cases while closed models handle “Nobel Prize kind of work.”

    11. Digital Transformation Will Massively Outpace Physical Robotics

    Karpathy predicts a clear ordering: first, a massive wave of “unhobbling” in the digital space where everything gets rewired and made 100x more efficient. Then, activity moves to the interface between digital and physical (sensors, cameras, lab equipment). Finally, the physical world itself transforms, but on a much longer timeline because “atoms are a million times harder than bits.” He notes that robotics requires enormous capital expenditure and conviction, and most self-driving startups from 10 years ago did not survive long term.

    12. Why Karpathy Stays Independent From Frontier Labs

    Karpathy gives a nuanced answer about why he is not working at a frontier lab. He says employees at these labs cannot be fully independent voices because of financial incentives and social pressure. He describes this as a fundamental misalignment: the people building the most consequential technology are also the ones who benefit most from it financially. He values being “more aligned with humanity” outside the labs, though he acknowledges his judgment will inevitably drift as he loses visibility into what is happening at the frontier.

    Detailed Summary

    The AI Psychosis and the End of Hand-Written Code

    The conversation opens with Karpathy describing what he calls a state of perpetual “AI psychosis.” Since December 2025, he has not typed a line of code. The shift was not gradual. It was a hard flip from doing 80% of his own coding to doing almost none. He compares the anxiety of unused agent capacity to the old PhD feeling of watching idle GPUs. Except now, the scarce resource is not compute. It is tokens, and you feel the pressure to maximize your token throughput at all times.

    He describes the modern workflow: you have multiple coding agents (Claude Code, Codex, or similar harnesses) running simultaneously across different repositories. Each agent takes about 20 minutes on a well-scoped task. You delegate entire features, review the output, and move on. The job is no longer typing. It is orchestration. And when it does not work, the overwhelming feeling is that it is a “skill issue,” not a capability limitation.
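    This orchestration pattern is easy to picture as code. The sketch below is a toy stand-in, not a real harness: `run_agent` pretends to be a ~20-minute coding-agent run (Claude Code, Codex, or similar), and the orchestrator simply fans tasks out and collects results.

    ```python
    # Toy sketch of the orchestration workflow: fan well-scoped tasks out to
    # several agents in parallel, then review the results as they finish.
    # run_agent is a hypothetical stand-in; it does not call any real agent.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def run_agent(task: str) -> str:
        # In reality: a long autonomous run against one repository.
        return f"done: {task}"

    tasks = ["repo-a: add tests", "repo-b: fix flaky CI", "repo-c: new endpoint"]

    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(run_agent, t) for t in tasks]
        results = [f.result() for f in as_completed(futures)]

    # The human's job is reviewing `results`, not writing the code.
    ```

    The point of the structure is that the human touches each task only at the boundaries: framing it before submission and reviewing it after completion.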

    Karpathy says most people, even his own parents, do not fully grasp how dramatic this shift has been. The default workflow of any software engineer sitting at a desk today is fundamentally different from what it was six months ago.

    AutoResearch: Closing the Loop on AI Research

    The centerpiece of the conversation is AutoResearch, Karpathy’s project for fully autonomous AI research. The setup is deceptively simple: give an agent an objective metric (like validation loss on a language model), a codebase to modify, and boundaries for what it can change. Then let it loop. It generates hypotheses, runs experiments, evaluates results, and commits improvements. No human in the loop.
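    The loop is simple enough to caricature in a few lines. This is a toy rendition under loud assumptions: the quadratic “loss” stands in for a real training run, and `propose` stands in for the agent’s hypothesis generation, which in the real system is an LLM editing a codebase within set boundaries.

    ```python
    import random

    # Toy sketch of the AutoResearch loop: propose a change, run the
    # experiment, and keep ("commit") the change only if the metric improves.

    def run_experiment(config):
        # Stand-in for an expensive training run; returns the objective metric.
        wd, beta2 = config["weight_decay"], config["adam_beta2"]
        return (wd - 0.1) ** 2 + (beta2 - 0.95) ** 2  # lower is better

    def propose(config, rng):
        # Stand-in for hypothesis generation: perturb one allowed knob.
        candidate = dict(config)
        key = rng.choice(sorted(candidate))
        candidate[key] += rng.gauss(0, 0.02)
        return candidate

    def auto_research(config, steps=300, seed=0):
        rng = random.Random(seed)
        best, best_loss = config, run_experiment(config)
        for _ in range(steps):
            candidate = propose(best, rng)
            loss = run_experiment(candidate)      # expensive part in real life
            if loss < best_loss:                  # cheap verification
                best, best_loss = candidate, loss # "commit" the improvement
        return best, best_loss

    baseline = {"weight_decay": 0.0, "adam_beta2": 0.999}
    tuned, tuned_loss = auto_research(baseline)
    ```

    Even this dumb hill-climber improves on the baseline, which is the structural point: once the objective is measurable and the boundaries are set, no human needs to sit inside the loop.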

    Karpathy was surprised it worked as well as it did. He had already hand-tuned his NanoGPT-derived training setup over years using his two decades of experience. When he let AutoResearch run overnight, it found improvements he had missed. The weight decay on value embeddings was forgotten. The Adam optimizer betas were not sufficiently tuned. These are the kinds of things that interact with each other in complex ways that a human researcher might not systematically explore.

    The deeper insight is structural: everything around frontier-level intelligence is about extrapolation and scaling laws. You do massive exploration on smaller models and then extrapolate to larger scales. AutoResearch is perfectly suited for this because the experimentation is expensive but the verification is cheap. Did the validation loss go down? Yes or no.

    Karpathy envisions this scaling beyond a single machine. His “AutoResearch at Home” concept borrows from distributed computing projects like SETI@home and Folding@home. Because verification is cheap but search is expensive, you can accept contributions from untrusted workers across the internet. He draws a blockchain analogy: commits instead of blocks, experimentation as proof of work, leaderboard placement as reward. A global swarm of agents contributing compute could, in theory, rival frontier labs that have massive but centralized resources.
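    The asymmetry that makes untrusted contributions workable can be sketched directly. In this hypothetical coordinator, a worker does the expensive search and submits a config plus a claimed loss; the coordinator only re-runs that one config (assuming a deterministic training run, which the toy `train_and_eval` fakes).

    ```python
    # Toy sketch of cheap verification for untrusted workers: re-run the
    # submitted config once and check the claimed metric before crediting it.

    def train_and_eval(config):
        # Stand-in for a deterministic training run returning validation loss.
        return (config["lr"] - 3e-4) ** 2

    def accept_submission(current_best_loss, submitted_config, claimed_loss):
        actual = train_and_eval(submitted_config)   # one run, not a search
        reproduces = abs(actual - claimed_loss) < 1e-12
        improves = actual < current_best_loss
        return reproduces and improves              # leaderboard credit if True

    current_best = train_and_eval({"lr": 1e-3})
    honest = accept_submission(current_best, {"lr": 3e-4}, 0.0)
    cheater = accept_submission(current_best, {"lr": 1e-3}, 0.0)
    ```

    A worker who fabricates a result fails reproduction, so the coordinator never has to trust anyone; this is the “expensive to find, cheap to verify” property doing the work that proof-of-work does in the blockchain analogy.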

    The Claw Paradigm and the Death of Apps

    Karpathy introduces the concept of the “claw,” a persistent, looping agent that operates in its own sandbox, has sophisticated memory, and works on your behalf even when you are not watching. This goes beyond a single chat session with an AI. A claw has persistence, autonomy, and the ability to interact with external systems.

    His personal example is “Dobby the Elf Claw,” a home automation agent that controls his entire smart home through WhatsApp. The agent scanned his local network, found his Sonos speakers, reverse-engineered the API, and started playing music in three prompts. It did the same for his lights, HVAC, shades, pool, spa, and security cameras (using a Qwen vision model for change detection on camera feeds).

    The broader point is that this renders most consumer apps unnecessary. Why maintain six different smart home apps when a single agent can call all the APIs directly? Karpathy argues the industry needs to reconfigure around the idea that the customer is increasingly the agent, not the human. Everything should be exposed API endpoints. The intelligence layer (the LLM) is the glue that ties it all together.
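    The “customer is the agent” idea reduces to a familiar shape: devices expose plain callable endpoints, and the intelligence layer dispatches structured tool calls against them. The sketch below is illustrative only; `sonos_play` and `set_lights` are invented names, not real device APIs.

    ```python
    # Hypothetical sketch of API-first software consumed by an agent rather
    # than through per-device apps.

    TOOLS = {}

    def tool(fn):
        TOOLS[fn.__name__] = fn  # register the endpoint for the agent
        return fn

    @tool
    def sonos_play(room, track):
        return f"playing {track} in {room}"

    @tool
    def set_lights(room, level):
        return f"{room} lights set to {level}%"

    def dispatch(call):
        # In the real setup an LLM turns a natural-language WhatsApp message
        # into this structured call; here we just execute it.
        return TOOLS[call["name"]](**call["args"])

    reply = dispatch({"name": "sonos_play",
                      "args": {"room": "kitchen", "track": "some jazz"}})
    ```

    Once every device is a registered endpoint, the six smart-home apps collapse into one natural-language surface, which is exactly the unification Dobby performs.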

    He predicts this will become table stakes within a few years. Today it requires vibe coding and direct agent interaction. Soon, even open source models will handle this trivially. The barrier will come down until every person has a claw managing their digital life through natural language.

    Model Jaggedness and the Limits of Reinforcement Learning

    One of the most technically interesting sections covers what Karpathy calls “jaggedness.” Current AI models are simultaneously superhuman at verifiable tasks (coding, math, structured reasoning) and surprisingly mediocre at anything outside the RL reward loop. His go-to example: ask any frontier model to tell you a joke, and you will get the same one from four years ago. “Why don’t scientists trust atoms? Because they make everything up.” The models have improved enormously, but joke quality has not budged because it is not being optimized.

    This jaggedness creates an uncanny valley in interaction. Karpathy describes the experience as talking to someone who is simultaneously a brilliant PhD systems programmer and a 10-year-old. Humans have some variance in ability across domains, but nothing like this. The implication is that the narrative of “general intelligence improving across all domains for free as models get smarter” is not fully accurate. There are blind spots, and they cluster around anything that lacks objective evaluation criteria.

    He and Sarah Guo discuss whether this should lead to model “speciation,” where specialized models are fine-tuned for specific domains rather than one monolithic model trying to be good at everything. Karpathy thinks speciation makes sense in theory (like the diversity of brains in the animal kingdom) but says the science of fine-tuning without losing capabilities is still underdeveloped. The labs are still pursuing monocultures.

    Open Source, Centralization, and Power Balance

    Karpathy, a long-time open source advocate, estimates the gap between closed and open source models has narrowed from 18 months to roughly 6 to 8 months. He draws a direct parallel to Linux: despite closed alternatives like Windows and macOS, the industry structurally needs a common open platform. Linux runs on 60%+ of computers because businesses need a shared foundation they feel safe using.

    The challenge for open source AI is capital expenditure. Training frontier models is astronomically expensive, and that is where the comparison to Linux breaks down somewhat. But Karpathy argues the current dynamic is actually healthy: frontier labs push the bleeding edge with closed models, open source follows 6 to 8 months behind, and that trailing capability is still enormously powerful for the vast majority of use cases.

    He expresses deep skepticism about centralization, citing his Eastern European background and the historical track record of concentrated power. He wants more labs, more independent voices, and an “ensemble” approach to decision-making about AI’s future. He worries about the current trend of further consolidation even among the top labs.

    The Job Market: Digital Unhobbling and the Jevons Paradox

    Karpathy recently published an analysis of Bureau of Labor Statistics jobs data, color-coded by which professions primarily manipulate digital information versus physical matter. His thesis: digital professions will be transformed first and fastest because bits are infinitely easier to manipulate than atoms. He calls this “unhobbling,” the release of a massive overhang of digital work that humans simply did not have enough thinking cycles to process.

    On whether this means fewer software engineering jobs, Karpathy is cautiously optimistic. He invokes the Jevons Paradox: when something becomes cheaper, demand often increases so much that total consumption goes up. The canonical example is ATMs and bank tellers. ATMs were supposed to replace tellers, but they made bank branches cheaper to operate, leading to more branches and more tellers (at least until 2010). Similarly, if AI makes software dramatically cheaper, the demand for software could explode because it was previously constrained by scarcity and cost.

    He emphasizes that the physical world will lag behind significantly. Robotics requires enormous capital, conviction, and time. Most self-driving startups from a decade ago failed. The interesting opportunities in the near term are at the interface between digital and physical: sensors feeding data to AI systems, actuators executing AI decisions in the real world, and new markets for information (he imagines prediction markets where agents pay for real-time photos from conflict zones).

    Education in the Age of Agents

    Karpathy’s MicroGPT project distills the entire LLM training process into 200 lines of Python. He started making an explanatory video but stopped, realizing the format is obsolete. If the code is already that simple, anyone can ask an agent to explain it in whatever way they need: different languages, different skill levels, infinite patience, multiple approaches. The teacher’s job is no longer to explain. It is to create the thing that is worth explaining, and then let agents handle the last mile of education.
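    To make the distillation idea concrete: the toy below is emphatically not MicroGPT, just an even smaller caricature of what such a distillation keeps, a model of P(next char | current char) plus a loss you can evaluate, here with bigram counts standing in for the neural network.

    ```python
    import math
    from collections import Counter

    # A caricature of language-model training distilled to its essence:
    # estimate next-character probabilities, then measure a validation loss.

    text = "hello hello world"
    counts = Counter(zip(text, text[1:]))   # "training": count char bigrams
    vocab = sorted(set(text))

    def prob(a, b, alpha=1.0):
        # add-alpha smoothed bigram probability P(b | a)
        num = counts[(a, b)] + alpha
        den = sum(counts[(a, c)] for c in vocab) + alpha * len(vocab)
        return num / den

    def avg_nll(s):
        # mean negative log-likelihood per transition: the "validation loss"
        return -sum(math.log(prob(a, b)) for a, b in zip(s, s[1:])) / (len(s) - 1)
    ```

    Strings resembling the training text score a lower loss than unseen gibberish, and once the core is this small, an agent can explain any line of it at whatever depth a learner needs, which is Karpathy’s point about the last mile of teaching.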

    He envisions a future where education shifts from “guides and lectures for humans” to “skills and curricula for agents.” A skill is a set of instructions that tells an agent how to teach something, what progression to follow, what to emphasize. The human educator becomes a curriculum designer for AI tutors. Documentation shifts from HTML for humans to markdown for agents.

    His punchline: “The things that agents can do, they can probably do better than you, or very soon. The things that agents cannot do is your job now.” For MicroGPT, the 200-line distillation is his unique contribution. Everything else, the explanation, the teaching, the Q&A, is better handled by agents.

    Why Not Return to a Frontier Lab?

    The conversation closes with a nuanced discussion about why Karpathy remains independent. He identifies several tensions. First, financial alignment: employees at frontier labs have enormous financial incentives tied to the success of transformative (and potentially disruptive) technology. This creates a conflict of interest when it comes to honest public discourse. Second, social pressure: even without arm-twisting, there are things you cannot say and things the organization wants you to say. You cannot be a fully free agent. Third, impact: he believes his most impactful contributions may come from an “ecosystem level” role rather than being one of many researchers inside a lab.

    However, he acknowledges a real cost. Being outside frontier labs means his judgment will inevitably drift. These systems are opaque, and understanding how they actually work under the hood requires being inside. He floats the idea of periodic stints at frontier labs, going back and forth between inside and outside roles to maintain both independence and technical grounding.

    Thoughts

    This is one of the most honest and technically grounded conversations about the current state of AI I have heard in 2026. A few things stand out.

    The AutoResearch concept is genuinely important. Not because autonomous hyperparameter tuning is new, but because Karpathy is framing the entire problem correctly: the goal is not to build better tools for researchers. It is to remove researchers from the loop entirely. The fact that an overnight run found optimizations that a world-class researcher missed after years of manual tuning is a powerful data point. And the distributed computing vision (AutoResearch at Home) could be the most consequential idea in the entire conversation if someone builds it well.

    The “death of apps” framing deserves more attention. Karpathy’s Dobby example is not a toy demo. It is a preview of how every consumer software company’s business model gets disrupted. If agents can reverse-engineer APIs and unify disparate systems through natural language, the entire app ecosystem becomes a commodity layer beneath an intelligence layer. The companies that survive will be the ones that embrace API-first design and accept that their “user” is increasingly an LLM.

    The jaggedness observation is underappreciated. The fact that models can autonomously improve training code but cannot tell a new joke should be deeply uncomfortable for anyone claiming we are on a smooth path to AGI. It suggests that current scaling and RL approaches produce narrow excellence, not general intelligence. The joke example is funny, but the underlying point is serious: we are building systems with alien capability profiles that do not match any human intuition about what “smart” means.

    Finally, Karpathy’s decision to stay independent is itself an important signal. When one of the most capable AI researchers in the world says he feels “more aligned with humanity” outside of frontier labs, that should be taken seriously. His point about financial incentives and social pressure creating misalignment is not abstract. It is structural. And his proposed solution of rotating between inside and outside roles is pragmatic and worth consideration for the entire field.