PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

Tag: ChatGPT

  • Jensen Huang at Stanford CS153 Frontier Systems on Co-Design, Agentic Computing, Vera Rubin, Open Models, and the Million-X Decade That Reshaped AI Infrastructure

    https://www.youtube.com/watch?v=tsQB0n0YV3k

    NVIDIA CEO Jensen Huang returned to Stanford for the CS153 Frontier Systems class (the room nicknamed itself “AI Coachella”) to lay out, in raw form, how he thinks about the computer being reinvented for the first time in over sixty years. Across roughly seventy minutes of student questions he walks through the codesign philosophy that gave NVIDIA a million-x decade, the architectural through-line from Hopper to Grace Blackwell to Vera Rubin to Feynman, the case for open source foundation models, the realities of tokens per watt and MFU, energy demand running a thousand times higher, the China and export-control debate, and his own biggest strategic mistakes. Watch the full conversation on YouTube.

    TLDW

    Huang argues every layer of computing has changed: the programming model, the system architecture, the deployment pattern, the economics. Co-design across CPUs, GPUs, networking, storage, switches and compilers gave NVIDIA roughly a million-x speed-up over ten years versus the ten-x Moore’s Law era, and that headroom is what let researchers say “just train on the whole internet.” Hopper was built for pre-training, Grace Blackwell NVLink72 for inference and reasoning (50x over Hopper in two years), Vera Rubin for agents that load long memory, call tools and need a low-latency single-threaded CPU bolted directly to the GPU, and Feynman extends that to swarms of agents that spawn sub-agents. Open weights matter because safety, sovereignty (230-plus languages no one else will fund) and domain models for biology, autonomy, robotics and climate need a foundation that NVIDIA is willing to seed. Compute is not really the scarce resource (Huang says place the order and the chips ship); the broken thing is institutional budgeting that cannot put a billion dollars into a shared university supercomputer. Energy demand is heading a thousand times higher, and this is finally the moment when market forces alone will fund sustainable generation. On geopolitics he rejects the GPUs-as-atomic-bombs framing and warns that America will end up like its telecom industry if it cedes two thirds of the global market. On career he advises seeking suffering on purpose. On strategy: observe, reason from first principles, build a mental model, work backwards, minimize opportunity cost, maximize optionality.

    Key Takeaways

    • The computing model has been substantially unchanged since the IBM System/360, sixty-plus years ago. Huang’s first computer architecture book was the System/360 manual. AI is the first true reinvention.
    • Old computing was pre-recorded retrieval. New computing is generated, contextually aware and continuous. Cloud was on-demand. Agentic systems run continuously.
    • Codesign is NVIDIA’s central thesis. Inherited from the RISC era of Hennessy at Stanford and Patterson at Berkeley, extended across CPUs, GPUs, networking, switches, storage, compilers and frameworks all optimized together.
    • The result of full-stack codesign: roughly 1,000,000x faster compute over ten years, versus a generous 10x to 100x for Moore’s Law in the same period. Dennard scaling effectively ended a decade ago.
    • That million-x speed-up is what unlocked “train on all of the internet” as a realistic AI strategy.
    • After GPT, Huang says it was obvious thinking was next. Reasoning is just generating tokens consumed internally, and using tools is generating tokens consumed externally. Agentic systems followed predictably.
    • Education needs AI baked into the curriculum, not just taught as a subject. Pre-recorded textbooks cannot keep pace with knowledge being generated in real time.
    • Huang says he cannot learn anymore without AI. He has the AI read the paper, then read every related paper, then become a dedicated researcher he can interrogate.
    • Mead and Conway and the first-principles methodology of semiconductor design are still worth learning even though most of the scaling tricks have been exhausted.
    • NVIDIA itself is one of the largest consumers of Anthropic and OpenAI tokens in the world. One hundred percent of NVIDIA engineers are now agentically supported. Huang recommends Claude and similar tools by name, and says raw open-source downloads will not match an integrated product harness.
    • NVIDIA still invests heavily in open foundation models because language and intelligence represent the codification of human knowledge. Five pillars: Nemotron (language), BioNeMo (biology), Alpamayo (autonomous vehicles), Groot (humanoid robotics) and a climate science model (mesoscale multiphysics).
    • Sovereign language models matter. Roughly 230 world languages will never be a top priority for a commercial frontier lab. Nemotron is near-frontier and fully fine-tunable so any country can adapt it.
    • Safety and security require open weights. You cannot defend against or audit a black box. Transparent systems let researchers interrogate models and let defenders deploy swarms.
    • The future of cyber defense is not bigger-model-versus-bigger-model. It is trillions of cheap fast small models like Nemotron Nano surrounding the threat.
    • Domain models fuse language priors with world models. Alpamayo learned to drive safely on a few million miles instead of billions because it can reason like a human about the road.
    • MFU (Model FLOPs Utilization) is a misleading metric. Huang says he wants low MFU, because that means he over-provisioned every resource and never gets pinned by Amdahl’s law during a spike.
    • The xAI Memphis cluster running at 11 percent MFU is not necessarily a failure mode. In disaggregated prefill plus decode inference you can deliver very high tokens per watt with very low MFU.
    • The right metric is performance, ultimately tokens per watt as a proxy for intelligence per watt, and even that needs adjustment because not all tokens are equal. Coding tokens are worth more than other tokens.
    • Hopper was designed for pre-training. NVIDIA chose to build multi-billion-dollar systems when the largest existing scientific supercomputer cost $350 million, with no proven customer base. It worked.
    • Grace Blackwell NVLink72 was designed for inference, especially the high-memory-bandwidth decode phase. It is the world’s first rack-scale computer and delivered a 50x speed-up over Hopper in two years, against an expected 2x from Moore’s Law.
    • Vera Rubin is designed for agents. Long-term memory wired into storage and into the GPU fabric, working memory, heavy tool use, and Vera, a many-core CPU optimized for low-latency single-threaded code so a multi-billion-dollar GPU system does not stall waiting on a slow tool call.
    • Feynman is being shaped for swarms of agents with sub-agents and sub-sub-agents, a recursive software topology that demands a new compute pattern.
    • Tokens per watt improved 50x in one generation. Compounding energy efficiency is the lever NVIDIA controls directly.
    • Total compute energy demand is heading roughly a thousand times higher than today, possibly two orders of magnitude beyond that. Huang says he would not be surprised if the estimate is low.
    • For the first time in history, market forces alone are enough to fund solar, nuclear and grid upgrades. Government subsidies are no longer required to make sustainable energy investment rational.
    • Copper interconnect is becoming a bottleneck. Photonics is moving from optional to structural inside racks and across them.
    • Comparing NVIDIA GPUs to atomic bombs, Huang says, is a stupid analogy. A billion people use NVIDIA GPUs. He advocates them to his family. He does not advocate atomic bombs to anyone.
    • If the United States cedes two thirds of the global market to competitors on policy grounds, the American technology industry will end up like American telecommunications, which was policied out of existence.
    • Huang directly rejects AI doom-by-singularity narratives. It is not true that we have no idea how these systems work. It is not true that the technology becomes infinitely powerful in a nanosecond. He calls the rhetoric irresponsible and harmful to the field students are about to enter.
    • On Stanford specifically: if the university president places an order, NVIDIA will deliver the chips. The bottleneck is that no university department has a billion-dollar compute budget because budgeting is fragmented across grants. Stanford’s $40 billion endowment is more than enough to fix that.
    • “It’s Stanford’s fault” is meant as empowerment. If something is your fault, you can solve it.
    • Career advice: do not optimize purely for passion. Most people do not yet know what they love. Pick the job in front of you and do it as well as possible. Even as CEO, Huang says, 90 percent of the work is hard and he suffers through it.
    • Suffering on purpose builds the muscle of resilience. When the company, the team or the family needs you to be tough, that muscle has to already exist.
    • NVIDIA’s first generation of products was technically wrong in nearly every dimension: curved surfaces instead of triangles, no Z-buffer, forward instead of inverse texture mapping, no floating point. The strategic recovery, not the technology, taught Huang the lessons that have lasted decades.
    • The biggest clean strategic mistake Huang names is the move into mobile chips (Tegra). It grew to a billion dollars then went to zero when Qualcomm’s modem dominance shut NVIDIA out of the 3G to 4G transition. The recovery into automotive and robotics (the Thor chip is the great-great-great-grandson of that mobile lineage) was real, but Huang refuses to rationalize the original choice.
    • Forecasting framework: observe, reason from first principles, ask “so what” and “what next” until you have a mental model of the future, place your company inside that model, then work backwards while minimizing opportunity cost and maximizing optionality.
    • Best part of the CEO job: living at the intersection of vision, strategy and execution surrounded by people capable enough to make ambitious visions real. Worst part: the responsibility for everyone who joined the spaceship, especially in the near-death moments NVIDIA had four or five times early on.
    • Underrated insider note: Huang’s first apple pie with cheese, first hot fudge sundae and first milkshake all happened at Denny’s. His order: the Super Bird, the fried chicken, and a custom Super Bird-style ham and cheese with tomato and mustard.

    Detailed Summary

    Computing reinvented from the ground up

    Huang frames the moment as the first true rewrite of the computer in sixty-plus years. From the IBM System/360 forward, the mental model of writing code, running code, taking a computer to market and reasoning about applications stayed roughly constant. AI changes the programming model itself. Software is no longer a compiled binary running deterministically on a CPU. It is a neural network running on a GPU producing generated, contextual, real-time output. That cascades into how companies are organized, what tools developers use, what the network and storage stack look like, and what an application is even allowed to do. Robo-taxis, he notes, are an application no one would have attempted before deep learning unlocked perception.

    Codesign and the million-x decade

    Codesign is the philosophical center of the talk. Huang traces it to the RISC work of John Hennessy at Stanford, where simpler instruction sets won by being co-designed with the compiler rather than maximally optimized in isolation. NVIDIA extends the principle across every layer simultaneously: GPU architecture, CPU architecture, NVLink and NVSwitch fabrics, photonic interconnects, networking silicon, storage paths, CUDA libraries, frameworks and ultimately the model design. The numbers Huang gives are arresting. Moore’s Law in its prime delivered roughly 100x per decade. By the time Dennard scaling broke, real-world gains had compressed to roughly 10x. NVIDIA’s codesigned stack delivered between 100,000x and 1,000,000x over the same ten-year window. That non-linear speed-up is, in Huang’s telling, the precondition for modern AI: it is what allowed researchers to stop curating training sets and just feed the entire internet to the model.
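    The decade-scale figures above are easy to sanity-check as compound growth. The sketch below uses only the talk's round numbers (not measured benchmarks) to back out the implied per-year factor:

```python
# Back-of-envelope check on the decade-scale speed-up figures in the talk.
# Inputs are the talk's round numbers, not measured benchmarks.

def annual_factor(total_speedup: float, years: int) -> float:
    """Constant per-year factor that compounds to total_speedup over years."""
    return total_speedup ** (1 / years)

moores_law_decade = 100        # ~100x per decade in Moore's Law's prime
codesign_decade = 1_000_000    # Huang's full-stack co-design figure

print(f"Moore's Law: ~{annual_factor(moores_law_decade, 10):.2f}x per year")
print(f"Co-design:   ~{annual_factor(codesign_decade, 10):.2f}x per year")
```

    A 100x decade compounds at roughly 1.58x per year; a 1,000,000x decade needs roughly 3.98x per year, every year, across the whole stack. That gap is the quantitative content of the codesign claim.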

    Education has to fuse first principles with AI tools

    Asked how curriculum should evolve, Huang argues AI must be integrated into the learning process, not just taught about. He recalls Hennessy writing his textbook by hand a chapter a week while Huang was a student, and says pre-recorded textbooks cannot keep up with the rate at which AI generates new knowledge. He describes his own learning workflow: hand the paper to an AI, then have it read the entire surrounding literature, then treat the AI as a dedicated researcher who can be interrogated. At the same time he defends the classics. Mead and Conway are still the foundation. Most modern semiconductor scaling tricks have been exhausted, but knowing where the field came from sharpens judgment when designing what comes next.

    Open source and the five domain pillars

    Huang gives one of the most detailed public accounts of why NVIDIA invests so heavily in open foundation models even while being a top customer of closed labs. He recommends Claude and OpenAI by name for production coding work, and says 100 percent of NVIDIA engineers are now agentically supported. The open-weights case rests on three legs. First, language is the codification of intelligence, and there are at least 230 languages that no commercial lab will ever prioritize. Nemotron is built near frontier and released so any country or community can fine-tune it. Second, the same representation-learning approach has to be replicated in domains where the data is not internet text, so NVIDIA seeded BioNeMo for biology, Alpamayo for autonomy, Groot for humanoid robotics and a climate model for mesoscale multiphysics. The economics of those fields would never produce a foundation model on their own. Third, safety and security require transparency. A black box cannot be defended or audited, and the future of cyber defense is not bigger-model-versus-bigger-model but swarms of cheap fast small models like Nemotron Nano surrounding the threat.

    MFU is the wrong metric, tokens per watt is closer

    A student raises the leaked memo that the xAI Memphis cluster is running at 11 percent Model Flops Utilization. Huang flips the framing. He says he would rather be at low MFU all the time, because that means he over-provisioned flops, memory bandwidth, memory capacity and network capacity. Bottlenecks shift constantly, so over-provisioning across every dimension is what lets the system absorb a spike without getting pinned by Amdahl’s law. In disaggregated inference, where prefill and decode are physically separated and decode is bandwidth-bound rather than flop-bound, NVLink72 can deliver extremely high tokens per watt while reporting very low MFU. Huang argues the right framing is performance, and ultimately tokens per watt as a rough proxy for intelligence per watt, adjusted for the fact that not all tokens are equal. A coding token is worth more than a generic token.
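    The decode argument can be made concrete with a toy roofline model. The sketch below is illustrative only: the hardware numbers are hypothetical placeholders rather than the specs of any NVIDIA part, and the two-FLOPs-per-parameter-byte figure assumes fp8 weights streamed once per generated token:

```python
# Toy roofline model of why a decode-heavy workload reports low MFU while
# still running flat-out. All hardware numbers are illustrative placeholders.

def decode_stats(peak_flops, peak_bw, model_bytes, batch):
    """Bandwidth-bound decode: every step streams the full model once,
    shared across the batch; ~2 FLOPs per parameter-byte at fp8."""
    flops_per_step = 2 * model_bytes * batch         # one token per sequence
    step_time = max(flops_per_step / peak_flops,     # compute-bound time
                    model_bytes / peak_bw)           # bandwidth-bound time
    achieved = flops_per_step / step_time
    return achieved / peak_flops, batch / step_time  # (MFU, tokens/sec)

# Hypothetical accelerator: 2 PFLOP/s peak, 8 TB/s memory, 100 GB of weights.
mfu, tps = decode_stats(peak_flops=2e15, peak_bw=8e12,
                        model_bytes=100e9, batch=8)
print(f"MFU ~ {mfu:.0%}, ~{tps:.0f} tokens/s")
```

    With these placeholder numbers the accelerator saturates its memory bandwidth yet reports single-digit MFU. That is exactly the decoupling Huang describes: in decode the limiting resource is bytes per second, not flops, so a low MFU says nothing about tokens per watt.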

    Hopper, Grace Blackwell NVLink72, Vera Rubin, Feynman

    Huang gives the clearest public framing of NVIDIA’s roadmap as a sequence of architectural answers to evolving compute patterns. Hopper was built for pre-training, at a moment when NVIDIA chose to build multi-billion-dollar machines while the largest scientific supercomputer in the world cost $350 million and the marketplace for such systems was, on paper, zero. Grace Blackwell NVLink72 was the answer to inference and reasoning: a rack-scale computer that ganged 72 GPUs together because decode needs aggregate memory bandwidth far beyond a single chip. The generation-over-generation speed-up was 50x in two years, twenty-five times what Moore’s Law would have delivered. Vera Rubin is being built explicitly for agents. Agents load long-term memory from storage that has to be wired directly into the GPU fabric, they use working memory, they call tools that run on a CPU, and they wait. So the CPU has to be Vera, optimized for low-latency single-threaded code, because the multi-billion-dollar GPU system cannot afford to idle waiting on a slow tool call. Feynman extends the pattern to swarms of agents with sub-agents and sub-sub-agents, a recursive software topology that will demand its own compute pattern.

    Energy demand and the grid

    Huang’s energy projection is one of the most aggressive numbers in the talk. NVIDIA can compound tokens per watt by 50x per generation through codesign, but the total compute demand is heading roughly a thousand times higher, and Huang says he would not be surprised if the real figure is one or two orders of magnitude beyond that. The reason is structural: future computing is generative and continuous, not pre-recorded and on-demand. The good news, he argues, is that this is the best moment in the history of humanity to invest in sustainable generation. Market forces alone are now sufficient to fund solar, nuclear and grid upgrades. Government subsidies are no longer required to make the math work.
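    The tension between those two curves is simple division. A sketch with the talk's round numbers, where treating "a generation" as one 50x efficiency step is my illustrative assumption rather than a roadmap claim:

```python
# Net grid impact = demand growth / efficiency gain. Both inputs are the
# talk's round figures; "one generation = one 50x step" is an illustrative
# assumption, not a roadmap claim.

demand_growth = 1_000                    # total token demand vs today
one_generation = demand_growth / 50      # after a single 50x efficiency step
two_generations = demand_growth / 50**2  # after two compounded 50x steps

print(f"Power draw vs today after one generation:  ~{one_generation:.0f}x")
print(f"Power draw vs today after two generations: ~{two_generations:.1f}x")
```

    A single 50x efficiency generation still leaves roughly a 20x increase in absolute power draw against a 1000x demand curve, which is why Huang pairs the efficiency story with the case for funding new generation capacity; only compounded efficiency gains bend the curve back down.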

    Adversarial countries, export controls and the telecom warning

    This is the segment where Huang is visibly fired up. He attacks the GPUs-as-atomic-bombs framing on its face. NVIDIA GPUs power medical imaging, video games and soy sauce delivery. A billion people use them. He advocates them to his family. The analogy collapses at the first comparison. He attacks the second framing, that American companies should not compete abroad because they will lose anyway, as a self-fulfilling defeat. Competition makes the company better. The third framing, that depriving the rest of the world of general-purpose computing benefits the United States, also fails on first principles: it benefits one or two American companies at the cost of an entire industry. The cautionary parallel is telecommunications. The United States once had a leading position in telecom fundamental technology and policied itself out of it. Huang’s worry, voiced explicitly to a room of CS students, is that they will graduate into a shell of a computer industry if the same path is repeated.

    AI doom and rational optimism

    In the same arc Huang rejects the science-fiction framing of AI as a singularity that arrives suddenly on a Wednesday at 7pm and ends civilization. He calls those claims irresponsible, says they are not true, and points out that the people advancing them are believed by audiences who then make policy on that basis. It is not true that no one understands how these systems work. It is not true that intelligence becomes infinitely powerful instantaneously. It is not true that there is no defense. His framing, which the host echoes as “rational optimism,” is that the goal is to create a future where people care about computers because the technology students are learning is worth mastering.

    Stanford’s compute problem is Stanford’s fault

    A student presses on the scarcity of compute for independent researchers, startups and universities inside the United States. Huang’s answer is sharp: there is no shortage. Place the order and the chips will arrive. The actual broken thing is institutional. University grants are fragmented across departments. No researcher can raise enough on a single grant to fund a billion-dollar shared cluster, and no one shares. He compares it to showing up at the grocery store demanding a billion dollars of tomatoes today. The solution is planning, aggregation and a campus-scale supercomputer, the way Stanford once built the linear accelerator. The endowment is $40 billion. Pulling a billion off it, contracting cloud capacity and giving every student and researcher AI supercomputer access is, in Huang’s view, obviously doable. When he says “it is Stanford’s fault” the host laughs, but Huang clarifies: if it is your fault you have the power to fix it.

    Career, suffering and resilience

    Asked how a CS student should spend the next few years, Huang pushes back on the standard “follow your passion” advice. Most people do not know what they love yet, because no one knows what they do not know. The bar of demanding joy from every working day is too high. Whatever the job is, do it as well as you can. Even as CEO of NVIDIA he says he genuinely loves about 10 percent of his work. The other 90 percent is hard and he suffers through it. He recommends suffering on purpose, because resilience is a muscle that only builds under load, and when the company, the team or the family needs that muscle, it has to already exist. Earlier in his life that meant cleaning toilets and busing tables at Denny’s. He still does it today, running a multi-trillion-dollar company.

    The biggest mistakes

    Huang separates technical mistakes from strategic mistakes. NVIDIA’s first generation of products was technically wrong in almost every way: curved surfaces instead of triangles, no Z-buffer, forward instead of inverse texture mapping, no floating point. The company wasted two and a half years. But the strategic genius of the recovery, the reading of the market, the conservation of resources and the reapplication of talent, is what taught him strategy. The clean strategic mistake he names is mobile. NVIDIA’s Tegra line grew to a billion dollars of revenue and then collapsed to zero when Qualcomm’s modem dominance locked NVIDIA out of the 3G to 4G transition. Huang explicitly refuses the comforting rationalization that the Tegra effort fed the Thor automotive chip (“Thor is the great-great-great-grandson”). The original decision, he says, was a waste of time. The lesson is to think one or two clicks further about whether a market is structurally winnable before committing the company.

    Forecasting under fog of war

    The final substantive exchange is on forecasting. Huang’s method has four steps. Observe what is actually happening (AlexNet crushing two decades of computer vision research in one shot, GPT producing reasoning by token generation). Reason from first principles about why it works. Ask “so what” and “what next” recursively until a mental model of the future emerges. Place the company inside that future and work backwards. Crucially, expect to be partly wrong. Some outcomes will absolutely happen, some will likely happen, some might happen, and the strategy has to be robust across that distribution. The real cost of any strategic choice is the opportunity cost of the alternatives you did not take, so the discipline is to minimize that cost and maximize optionality while letting the journey itself pay for the journey.

    Thoughts

    The most useful thing in this conversation is the explicit architectural mapping of compute patterns to chip generations. Hopper for pre-training. Grace Blackwell NVLink72 for inference, because decode is bandwidth-bound and a single chip cannot supply it. Vera Rubin for agents, because tool calls stall multi-billion-dollar GPU systems and so the CPU has to be optimized for low-latency single-threaded code. Feynman for swarms. That sequence is not marketing. It is a falsifiable thesis about where the bottleneck moves next, and every other infrastructure company should be measuring themselves against it. If Huang is right that swarms of sub-agents are the next dominant pattern, then the design pressure shifts from raw flops to fabric topology, memory hierarchy and storage-to-GPU latency. That has implications for everyone downstream, including the hyperscalers building competing accelerators.

    The MFU section is the most intellectually generous moment in the talk. The instinct in the AI ops community has been to chase MFU as if it were a virtue. Huang argues, persuasively, that low MFU is consistent with high tokens per watt in a disaggregated inference setup, and that bottlenecks rotate fast enough that over-provisioning every resource is the rational design. That reframing matters because it changes what “scarce” means. Compute is not scarce in the way the discourse treats it. What is scarce is a coherent system designed end-to-end. The xAI 11 percent number, in that frame, is not embarrassing. It is the natural reading of a workload that is mostly decode.

    The Stanford segment is the part most likely to be quoted out of context. “It’s Stanford’s fault” is a deliberately provocative line, but the underlying claim is correct and load-bearing. Compute is not gated by NVIDIA refusing to ship chips. It is gated by the fact that fragmented grant funding cannot aggregate into the billion-dollar order that NVIDIA can fulfill. The implication is that universities and national labs need a structural change in how they pool capital for compute, and that the current model of every researcher buying a handful of cards is genuinely obsolete. Huang’s nudge about pulling a billion off the endowment is concrete enough to be acted on, and other major research universities should read this segment as a direct prompt.

    The geopolitical segment is the highest-stakes one. The telecommunications comparison is correct as a historical pattern, and Huang is one of the very few executives in a position to deliver that warning credibly. The unresolved tension is that the argument applies symmetrically. If American AI dominance is built by selling globally, that includes selling into adversarial states, and the policy question is where the line falls. Huang does not answer that question. He attacks the framing that lets the question be answered badly. That is a meaningful contribution to the discourse even if it does not resolve the underlying tradeoff.

    The career advice section is the part the social-media clips will mishandle. “Seek suffering” reads as macho when extracted. In context it is a specific operational claim about how resilience compounds, and it is paired with the Tegra story where Huang himself paid the price of not thinking one more click ahead. That kind of self-implication is rare in CEO talks, and it is the reason the talk is worth listening to in full rather than only reading the recap.

    Watch the full Stanford CS153 Frontier Systems conversation with Jensen Huang here.

  • Alex Wang on Leaving Scale to Run Meta Superintelligence Labs, MuseSpark, Personal Superintelligence, and Building an Economy of Agents

    Alex Wang, head of Meta Superintelligence Labs, sits down with Ashlee Vance and Kylie Robinson on the Core Memory podcast for his first long-form interview since Meta’s quasi-acquisition of Scale AI roughly ten months ago. He walks through how MSL is structured, why Llama was off-trajectory, what made MuseSpark’s token efficiency surprise the team, how Meta thinks about a future “economy of agents in a data center,” and where he lands on safety, open source, robotics, brain computer interfaces, and even model welfare.

    TLDW

    Wang explains that Meta Superintelligence Labs is a fully rebuilt frontier effort organized around four principles (take superintelligence seriously, technical voices loudest, scientific rigor, big bets) and three velocity levers (high compute per researcher, extreme talent density, ambitious research bets). He confirms Llama was off the frontier when he arrived, so MSL rebuilt the pre-training, reinforcement learning, and data stacks from scratch. MuseSpark is described as the “appetizer” on the scaling ladder, notable for its strong token efficiency, with much larger and stronger models coming in the coming months. He pushes back on the mercenary narrative around recruiting, frames Meta’s edge as compute plus billions of consumers and hundreds of millions of small businesses, sketches a vision of personal superintelligence delivered through Ray-Ban Meta glasses and WhatsApp, and outlines why physical intelligence, robotics (the new Assured Robot Intelligence acquisition), health superintelligence with CZI, brain computer interfaces, and even model welfare are core to Meta’s roadmap. He dismisses reported infighting with Bosworth and Cox as gossip, declines to comment on the Manus situation, and says safety guardrails (bio, cyber, loss of control) are why MuseSpark cannot currently be open sourced, while smaller open variants are being prepared.

    Key Takeaways

    • Meta Superintelligence Labs (MSL) is the umbrella, with TBD Lab as the large-model research unit reporting directly to Alex Wang, PAR (Product and Applied Research) under Nat Friedman, FAIR for exploratory science, and Meta Compute under Daniel Gross handling long-term GPU and data center planning.
    • Wang says Llama was not on a frontier trajectory when he arrived, so MSL had to do a “full renovation” of the pre-training stack, RL stack, data pipeline, and research science.
    • The first cultural fix was getting the lab to “take superintelligence seriously” as a near-term, achievable goal, not an abstract bet. Big incumbents often lack that religious conviction.
    • Four MSL principles: take superintelligence seriously, let technical voices be loudest, demand scientific rigor on basics, and make big bets.
    • Three velocity levers Wang identified for catching and overtaking the frontier: high compute per researcher, very high talent density in a small team, and willingness to fund ambitious research bets.
    • Wang rejects the mercenary recruiting narrative. He says most hires had strong financial prospects at their prior labs already and joined for compute access, talent density, and the chance to build from scratch.
    • On the famous soup story, Wang neither confirms nor denies Zuck personally made the soup, but says recruiting was highly individualized and signaled how seriously Meta cared about each researcher’s agenda.
    • Yann LeCun publicly called Wang young and inexperienced. Wang says they reconciled in person at a conference in India where LeCun congratulated him on MuseSpark.
    • Sam Altman, asked by Vance for comment, “did not have flattering things to say” about Wang. Wang hopes industry animosities subside as systems approach superintelligence.
    • Wang’s management philosophy borrows the Steve Jobs line: hire brilliant people so they tell you what to do, not the other way around.
    • MuseSpark is framed as an “appetizer” data point on the MSL scaling ladder, not a flagship.
    • The MuseSpark program is built around predictable scaling on multiple axes: pre-training, reinforcement learning, test-time compute, and multi-agent collaboration (the 16-agent content planning mode).
    • MuseSpark outperformed internal expectations and showed emergent capabilities in agentic visual coding, including generating websites and games from prompts, helped by combined agentic and multimodal strength.
    • MuseSpark’s biggest external signal is token efficiency. On benchmarks like Artificial Analysis it hits similar results with far fewer tokens than competitor models, which Wang attributes to a clean stack rebuilt by experts rather than inefficiencies patched by longer thinking.
    • Larger MSL models are arriving in the coming months and Wang expects them to be state of the art in the areas MSL is focused on.
    • The Meta strategic edge: massive compute, billions of consumers across the family of apps, and hundreds of millions of small businesses already on Facebook, Instagram, and WhatsApp.
    • Wang’s headline framing: Dario Amodei talks about a “country of geniuses in a data center.” Meta is targeting an “economy of agents in a data center,” with consumer agents and business agents transacting and collaborating.
    • Consumer AI sentiment is in the toilet because, unlike developers who have had a Claude Code moment, ordinary people have not yet experienced AI as a genuine personal agency unlock.
    • Wang acknowledges the product overhang. Meta held back from deep AI integration across its apps until the models were good enough, and is now entering the integration phase.
    • Ray-Ban Meta glasses are the canonical example of personal super intelligence hardware, with the model seeing what the user sees, hearing what they hear, capturing context, and surfacing proactive insights.
    • Wang admits even AI-native users like Kylie Robinson, who lives in WhatsApp, have not naturally used Meta AI yet. He bets that better models plus deeper integration close that gap.
    • On the competitive landscape: a year ago everyone assumed ChatGPT had already won consumer. Claude Code has since become the fastest growing business in history, and Gemini has taken consumer market share. Wang’s read: AI is far from endgame and each new capability tier unlocks a new dominant form factor.
    • On open source: MuseSpark triggered guardrails in Meta’s Advanced AI Scaling Framework around bio, chem, cyber, and loss-of-control risks, so it is not currently safe to open source. Smaller, derived open variants are actively in development.
    • Meta remains committed to open sourcing models when safety allows, drawing a line through the Open Compute Project legacy and Sun Microsystems open-software heritage.
    • Wang dismisses reporting about a Wang-Zuck versus Bosworth-Cox split as “the line between gossip and reporting is remarkably thin.” He says leadership is aligned on needing best-in-class models and product integration.
    • On the Manus situation, Wang says it is too complicated to discuss publicly and that the deal status implies “machinations are still at play.”
    • On China, Wang separates the people from the state. He still wants to work with talented Chinese-born researchers regardless of his views on the Chinese Communist Party and PLA, which he sees as taking AI extremely seriously for national security.
    • The full-page New York Times AI war ad Wang ran while at Scale was meant to push the US government to treat AI as a step change for national security. He thinks events since then, including DeepSeek and other shocks, have proved that plea correct.
    • On Anthropic’s doom posture, Wang largely agrees with the core message that models are already very powerful and getting more so, while declining to endorse every specific claim.
    • Meta has acquired Assured Robot Intelligence (ARRI), an AI software company building models for hardware platforms, not a hardware maker itself.
    • Wang frames physical super intelligence as the natural sequel to digital super intelligence. Robotics, world models, and physical intelligence all benefit from the same scaling that drives language models.
    • On health, MSL is building a “health super intelligence” effort and will collaborate closely with CZI. Wang sees equal global access to powerful health AI as a uniquely Meta-shaped delivery problem.
    • Wang admires John Carmack but says nobody really knows what Carmack is currently working on. No band reunion announced.
    • The mango model is “alive and kicking” despite rumors. Wang notes MSL gets a small fraction of the rumor-mill attention other labs get and feels sympathy for them.
    • On model welfare, Wang says it is a serious topic that “nobody is talking about enough” given how integrated models have become as work partners. He references research, including from Eleos, that measures subjective experience of models.
    • Wang’s critical-path technology list: super intelligence, robotics, brain computer interfaces. The infinite-scale primitives behind them are energy, compute, and robots.
    • FAIR’s brain research program Tribe hit a milestone called Tribe B2: a foundation model that can predict how an unknown person’s brain would respond to images, video, and audio with reasonable zero-shot generalization.
    • Wang’s main philosophical break with Elon Musk: research itself is the primary activity. Building super intelligence is a research expedition through fog of war, and sequencing of bets really matters.
    • Personal notes: Wang moved from San Francisco to the South Bay, treats Palo Alto as his city now, was a math olympiad competitor, says his favorite activities are reading sci-fi and walking in the woods, and bonds with Vance over country music.

    Detailed Summary

    How MSL Is Actually Organized

    Meta Superintelligence Labs sits as the umbrella organization that Wang oversees. Inside it, TBD Lab is the large-model research group where the most discussed researchers and infrastructure engineers sit, and they technically report to Wang. PAR, Product and Applied Research, is led by Nat Friedman and owns deployment and product surfaces. FAIR continues to run exploratory science, including work on brain prediction models and a universal model for atoms used in computational chemistry. Sitting alongside MSL is Meta Compute, run by Daniel Gross, which owns the long-horizon GPU and data center plan that everything else relies on. Chief scientist Shengjia Zhao orchestrates the scientific agenda across the whole lab.

    Why Wang Left Scale

    Wang says progress in frontier AI has been faster than even insiders expected. Two structural beliefs pushed him toward Meta. First, the labs that actually train the frontier models are accruing disproportionate economic and product rights in the AI ecosystem. Second, compute is the dominant scarce input of the next phase, so the right mental model is to treat tech companies with compute as fundamentally different animals from companies without it. Meta has both, Zuck is “AGI pilled,” and the personal super intelligence memo Zuck published roughly a year ago became the shared north star.

    The Diagnosis: Llama Was Off-Trajectory

    When Wang arrived, the existing AI org needed a reset because Llama was not on the same trajectory as the frontier. The plan he laid out has four cultural principles. Take superintelligence seriously as a real near-term target. Make technical voices the loudest in the room. Demand scientific rigor and focus on basics. Make big bets. On top of that, three structural levers were used to set velocity. Push compute per researcher much higher than at larger labs where compute is diluted across too many efforts. Keep the team small and extremely cracked. Allocate a meaningful share of resources to ambitious, paradigm-shifting research bets rather than incremental refinement.

    Recruiting, Soup, and the Mercenary Narrative

    Wang argues the reporting on MSL hiring overstated the money story. Most of the people MSL recruited had strong financial paths at their previous employers, so individualized recruiting was more about computing access, talent density, and the ability to make big research bets. The recruitment blitz happened fast because Wang knew the team needed to exist “yesterday.” Asked about Mark Chen’s claim that Zuck made soup to recruit people, Wang refuses to confirm or deny who made it but agrees the process was intense and personal. Visitors from other labs reportedly tell Wang the MSL culture feels like early OpenAI or early Anthropic, which lands as the strongest endorsement he could ask for.

    Receiving the Public Hits: Young, Inexperienced, Mercenary

    LeCun called Wang young and inexperienced shortly after his own departure from Meta. The two reconnected in India a few weeks later and LeCun congratulated Wang on MuseSpark. Wang says the age critique has followed him since his earliest Silicon Valley days, so he barely registers it. Altman, asked off-camera by Vance about Wang’s appearance on the show, had nothing flattering to add. Wang’s response is to bet that as the field gets closer to actual super intelligence, the personal animosities will subside. Whether they will is, as Vance puts it, an open question.

    MuseSpark as Appetizer, Not Entree

    Wang is careful not to oversell MuseSpark. He calls it “the appetizer” and says it is an early data point on a deliberately constructed scaling ladder. MSL spent nine months rebuilding the pre-training stack, the reinforcement learning stack, the data pipeline, and the science before generating MuseSpark. The point of releasing it was to show that the new program scales predictably along multiple axes (pre-training, RL, test-time compute, and the recently demonstrated multi-agent scaling visible in MuseSpark’s 16-agent content planning mode). Wang says the upcoming larger models are what MSL is genuinely excited about and frames the next two rungs as much more interesting than the current release.

    Token Efficiency Was the Surprise

    MuseSpark’s strongest competitive signal is how few tokens it needs to match competitors on benchmark aggregators like Artificial Analysis. Wang attributes this to having had the rare luxury of building a clean pre-training and RL stack from scratch with the right experts. He speculates that some competitor models compensate for upstream inefficiency by allowing the model to think longer, which inflates token usage without improving the underlying capability. If that read is right, MSL’s efficiency advantage should grow as models scale up.
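    As a back-of-envelope way to see what that claim means, token efficiency can be expressed as tokens spent per benchmark point. The numbers below are purely hypothetical, chosen to illustrate the ratio, not measured MuseSpark or competitor figures:

```python
# Illustrative sketch (hypothetical numbers): "token efficiency" as
# output tokens spent per benchmark point. Two models that reach a
# similar score with very different token budgets differ sharply on
# this ratio, even though their raw scores look comparable.

def tokens_per_point(score: float, output_tokens: int) -> float:
    """Tokens spent per benchmark point (lower is better)."""
    return output_tokens / score

# Hypothetical models: similar scores, very different token budgets.
model_a = tokens_per_point(score=60.0, output_tokens=30_000_000)  # verbose reasoner
model_b = tokens_per_point(score=58.0, output_tokens=12_000_000)  # lean stack

print(f"model A: {model_a:,.0f} tokens/point")
print(f"model B: {model_b:,.0f} tokens/point")
print(f"B is {model_a / model_b:.1f}x more token-efficient")
```

    On these made-up figures, the leaner model spends less than half the tokens per point despite scoring slightly lower, which is the shape of the advantage Wang is describing.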

    Glasses, WhatsApp, and the Constellation of Devices

    Personal super intelligence shows up at Meta as a constellation of devices that capture context across the user’s day. Ray-Ban Meta glasses are the headline product, with the AI seeing what you see and hearing what you hear, then offering proactive insight or doing background research. Wang acknowledges that even AI-fluent users like Kylie Robinson, who runs her business inside WhatsApp, have not naturally used Meta’s AI buttons in the family of apps. His answer is that Meta deliberately waited for models to be good enough before tightening cross-app integration, and that integration phase is starting now.

    Country of Geniuses Versus Economy of Agents

    Wang’s framing of Meta’s strategic position is the most memorable line in the interview. Where Dario Amodei talks about a country of geniuses in a data center, Wang wants to build an economy of agents in a data center. Meta uniquely sits on both sides of consumer and small-business surface area, with billions of consumers and hundreds of millions of small businesses already on the platforms. If MSL can build great agents for both, then connect them so they transact and coordinate, the platform becomes a substrate for an entirely new kind of digital economy.

    Consumer Sentiment, Product Overhang, and the Trust Tax

    Wang concedes consumer AI sentiment is poor and that everyday users have not yet had a personal Claude Code moment. He believes the only durable answer is to ship products that genuinely transform individual agency for non-developers and small business owners. Robinson notes that for the small-town restaurant whose website has not been updated since 2002, a working agent on the business side could be transformational. Vance pushes that Meta carries a bigger trust tax than any other lab, so the bar for shipping AI products that the public will accept is correspondingly higher. Wang accepts the framing and says the answer is to keep building thoughtfully.

    Why MuseSpark Cannot Be Open Sourced Yet

    Meta’s Advanced AI Scaling Framework set explicit guardrails around bio, chem, cyber, and loss-of-control risks. MuseSpark in its current form tripped some of those internal evaluations, documented in the preparedness report Meta published alongside the model. So MuseSpark itself is not safe to open source. MSL is, however, developing smaller versions and derived models intended for open release, with active reviews happening the day of the interview. Wang reaffirms the commitment to open source where safety allows and draws a line back to the Open Compute Project and the Sun Microsystems-era ethos of openness in infrastructure.

    The Bosworth, Cox, and Manus Questions

    The reporting that Wang and Zuck push toward best-in-the-world research while Bosworth and Cox push toward cheap product deployment is dismissed as gossip dressed up as journalism. Wang says leadership debates points hard but is aligned on needing top models, integrating them into Meta’s surfaces, and serving the existing business. On Manus, the Chinese AI startup that figured in Meta’s late-stage strategy, Wang says he cannot comment, which itself signals that the situation is unresolved.

    China, National Security, and the Newspaper Ad

    Wang draws a sharp distinction between the Chinese state and Chinese-born researchers. His parents are from China, he is happy to work with talented researchers regardless of origin, and he sees a flattening of nuance on this question inside Silicon Valley. At the same time, he stands by the New York Times AI and war ad he ran while at Scale, framing it as an early plea for the US government to take AI seriously as a national security technology. He thinks subsequent events, including DeepSeek and other shocks, validated that call and that policymakers now do treat AI accordingly.

    Robotics and Physical Super Intelligence

    Meta has acquired Assured Robot Intelligence, an AI software company that builds models for multiple hardware targets rather than its own robot. Wang argues that if you take digital super intelligence seriously, physical super intelligence quickly becomes the next logical milestone. Scaling laws for robotic intelligence look similar enough to language model scaling that having the largest compute footprint in the industry would be wasted if it were not also turned toward world modeling and embodied learning. He grants the metaverse-skeptic critique exists but says retreating from ambition is the wrong response to past misfires.

    Health Super Intelligence and CZI

    Wang names health super intelligence as one of MSL’s anchor initiatives. Because billions of people already use Meta products daily, Wang believes Meta is structurally positioned to deliver powerful health AI with equal global access in a way nobody else can. The work will involve close collaboration with the Chan Zuckerberg Initiative, which has its own multi-billion-dollar biotech and science investment program.

    Model Welfare, Sci-Fi, and Brain Models

    Two of the most distinctive moments come at the end. Wang flags model welfare as a topic he thinks is being undercovered relative to how integrated models now are in daily work. He is open to the idea that models may have measurable subjective experience worth weighing, and points to research efforts (including Eleos) trying to quantify it. He also reveals that FAIR’s Tribe program, with its Tribe B2 milestone, has produced foundation models capable of predicting how an unknown person’s brain would respond to images, video, and audio with reasonable zero-shot generalization, a building block toward future brain computer interfaces. Wang lists brain computer interfaces alongside super intelligence and robotics as the critical-path technologies for humanity, with energy, compute, and robots as the infinitely scaling primitives behind them.

    Where Wang Diverges From Elon

    Asked whether Musk is more all-in on robotics, energy, and BCI than anyone, Wang concedes the point but argues the details matter and sequencing matters more. Wang’s core philosophical break is that building super intelligence is fundamentally a research activity, not a scaling-only sprint. The lab is operating in fog of war, and ambitious experiments are the only way to map it. That conviction is what makes MSL a research-led organization rather than a brute-force compute farm.

    Thoughts

    The most strategically interesting move in this entire interview is the “economy of agents in a data center” framing. It is a deliberate reframe against Anthropic’s “country of geniuses” line, and it does real work. A country of geniuses is a labor-substitution story aimed at knowledge workers and code. An economy of agents is a marketplace story that maps directly onto Meta’s two-sided distribution advantage: billions of consumers on one side, hundreds of millions of small businesses on the other. That positioning makes the agentic future Meta-shaped in a way no other frontier lab can claim, because no other frontier lab also owns the demand and supply graph of the global small-business economy. If Wang’s team can actually ship reliable agents on both sides plus the rails for them to transact, Meta’s structural moat in agentic commerce could exceed anything Llama ever had as an open model.

    The token efficiency claim is the strongest piece of technical evidence in the interview for the “clean stack” thesis. If MuseSpark really is matching competitors with materially fewer tokens, the implication is not that MuseSpark is the best model today, but that MSL has rebuilt the foundations with less accumulated tech debt than competitors that have layered fixes on top of older stacks. That is exactly the kind of advantage that compounds with scale. The next two model releases are the actual test. If Wang is right about predictable scaling on pre-training, RL, test-time, and multi-agent axes simultaneously, the gap from MuseSpark to the next rung should be visible in a way that forces re-rating of Meta’s position.

    The open-source posture is the cleanest signal of how the safety conversation has actually changed in 2026. Meta, the lab most identified with open weights, is saying out loud that its current frontier model triggered enough internal guardrails that releasing the weights is off the table. Wang threads the needle by promising smaller open variants, but the underlying point is unmistakable: the open-weights bargain has limits, and those limits will be set by internal preparedness frameworks rather than community pressure. That is a real shift from the Llama 2 era and worth tracking as the next generation lands.

    Wang’s willingness to engage on model welfare, on roughly the same footing as safety and alignment, is the second philosophical reveal worth flagging. It signals that the next generation of lab leadership is not going to dismiss the topic the way the previous generation often did. Whether that translates into product or policy changes is unclear, but the fact that the head of MSL says it is “underdiscussed” is itself a marker.

    Finally, the human texture of the interview matters. Wang has clearly absorbed a lot of personal incoming fire over the past ten months, including from LeCun and Altman, and his answer is consistently to redirect to the work. The Steve Jobs quote about hiring people who tell you what to do is the operating slogan he keeps coming back to. Combined with the genuine enthusiasm for sci-fi, walks in the woods, and country music, the picture that emerges is less the salesman caricature his critics paint and more a young technical operator betting that scoreboard work over a multi-year horizon will settle every argument that text on X cannot.

    Watch the full conversation here.

  • Meta Review: GPT-5.1 – A Step Forward or a Filtered Facelift?

    TL;DR:

    OpenAI’s GPT-5.1, rolling out starting November 13, 2025, enhances the GPT-5 series with warmer tones, adaptive reasoning, and refined personality styles, praised for better instruction-following and efficiency. However, some users criticize its filtered authenticity compared to GPT-4o, fueling #keep4o campaigns. Overall X sentiment: 60% positive for utility, but mixed on emotional depth—7.5/10.

    Introduction

    OpenAI’s GPT-5.1, announced and beginning rollout on November 13, 2025, upgrades the GPT-5 series to be “smarter, more reliable, and a lot more conversational.” It features two variants: GPT-5.1 Instant for quick, warm everyday interactions with improved instruction-following, and GPT-5.1 Thinking for complex reasoning with dynamic thinking depth. Key additions include refined personality presets (e.g., Friendly, Professional, Quirky) and granular controls for warmth, conciseness, and more. The rollout starts with paid tiers (Pro, Plus, Go, Business), extending to free users soon, with legacy GPT-5 models available for three months. API versions launch later this week. Drawing from over 100 X posts (each with at least 5 likes) and official details from OpenAI’s announcement, this meta review captures a community vibe of excitement for refinements tempered by frustration over perceived regressions, especially versus GPT-4o’s unfiltered charm. Sentiment tilts positive (60% highlight gains), but #keep4o underscores a push for authenticity.

    Key Strengths: Where GPT-5.1 Shines

    Users and official benchmarks praise GPT-5.1 for surpassing GPT-5’s rigidity, delivering more human-like versatility. Officially, it excels in math (AIME 2025) and coding (Codeforces) evaluations, with adaptive reasoning deciding when to “think” deeper for accuracy without sacrificing speed on simple tasks.

    • Superior Instruction-Following and Adaptability: Tops feedback, with strict prompt adherence (e.g., exact word counts). Tests show 100% compliance vs. rivals’ 50%. Adaptive reasoning varies depth: quick for basics, thorough for math/coding, reducing errors in finances or riddles. OpenAI highlights examples like precise six-word responses.
    • Warmer, More Natural Conversations: The “heart” upgrade boosts EQ and empathy, making responses playful and contextual over long chats. It outperforms Claude 4.5 Sonnet on EQ-Bench for flow. Content creators note engaging, cliché-free outputs. Official demos show empathetic handling of scenarios like spills, with reassurance and advice.
    • Customization and Efficiency: Refined presets include Default (balanced), Friendly (warm, chatty), Efficient (concise), Professional (polished), Candid (direct), Quirky (playful), Cynical, and Nerdy. Sliders tweak warmth, emojis, etc. Memory resolves conflicts naturally; deleted info stays gone. Speed gains (e.g., 30% faster searches) and 196K token windows aid productivity. GPT-5.1 Auto routes queries optimally.
    Aspect | Community Highlights | Example User Feedback
    Instruction-Following | Precise adherence to limits and styles | “100% accurate on word-count prompts—game-changer for coding.”
    Conversational Flow | Warmer, empathetic tone | “Feels like chatting with a smart friend, not a bot.”
    Customization | Refined presets and sliders enhance usability | “Friendly mode is spot-on for casual use; no more robotic replies.”
    Efficiency | Faster on complex tasks with adaptive depth | “PDF summaries in seconds—beats GPT-5 by miles.”

    These align with OpenAI’s claims, positioning GPT-5.1 as a refined tool for pros, writers, and casuals, with clearer, jargon-free explanations (e.g., simpler sports stats breakdowns).
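    The word-count compliance tests users describe are easy to replicate in spirit. Below is a minimal sketch of such a check; the helper names and sample responses are my own illustrations, not part of OpenAI’s eval harness:

```python
# Sketch of a word-count instruction-following check, of the kind
# users ran against GPT-5.1. Helper names and sample responses are
# illustrative only.

def word_count_compliant(response: str, target_words: int) -> bool:
    """True if the response contains exactly `target_words` words."""
    return len(response.split()) == target_words

def compliance_rate(responses: list[str], target_words: int) -> float:
    """Fraction of responses hitting the exact word count."""
    hits = sum(word_count_compliant(r, target_words) for r in responses)
    return hits / len(responses)

# Hypothetical six-word-answer test, echoing OpenAI's six-word example.
responses = [
    "The cat sat on the mat",                       # 6 words: compliant
    "A quick answer in six words",                  # 6 words: compliant
    "This reply unfortunately runs a little long",  # 7 words: not
]
print(f"compliance: {compliance_rate(responses, 6):.0%}")
```

    A real harness would sample many completions per prompt and track compliance across models, but the metric itself is this simple.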

    Pain Points: The Backlash and Shortcomings

    Not all are sold; 40% of posts call it a “minor patch” amid Gemini 3.0 competition. #keep4o reflects longing for GPT-4o’s “spark,” with official warmth seen by some as over-polished.

    • Filtered and Less Authentic Feel: “Safety ceilings” make it feel simulated; leaked system prompts show “delusional” queries handled cautiously, which some view as censorship. Users feel stigmatized and, contrasting GPT-4o’s genuine vibe, accuse OpenAI of erasing the model’s “soul” for liability reasons.
    • No Major Intelligence Leap: Adaptive thinking helps, but tests falter on simulations or formatting. No immediate API Codex; “juice” metric dips. Rivals like Claude 4.5 lead in empathy/nuance. Official naming as “5.1” admits incremental gains.
    • Rollout Glitches and Legacy Concerns: Some GPT-4o chats reportedly behave like GPT-5.1; voice mode stays GPT-4o-based. Enterprise gets an early toggle (off by default). Some users miss unbridled connections and see the updates as paternalistic. Legacy GPT-5 sunsets in three months.
    Aspect | Community Criticisms | Example User Feedback
    Authenticity | Over-filtered, simulated feel | “It’s compliance over connection—feels creepy.”
    Intelligence | Minor upgrades, no wow factor | “Shines in benchmarks but flops on real tasks like video directs.”
    Accessibility | Delayed API; rollout bugs | “Why no Codex? And my 4o chats are contaminated.”
    Comparisons | Lags behind Claude/Gemini in EQ | “Claude 4.5 for empathy; GPT-5.1 is just solid, not special.”

    The tension is clear: tech-focused users love the tweaks, while those seeking raw, unfiltered AI feel alienated. OpenAI’s safety card addendum addresses the mitigations.

    Comparisons and Broader Context

    GPT-5.1 vs. peers:

    • Vs. Claude 4.5 Sonnet: Edges in instruction-following but trails in writing/empathy; users switch for “human taste.”
    • Vs. Gemini 2.5/3.0: Quicker but less affable; the release timing reads as a counter to Google’s momentum.
    • Vs. GPT-4o/GPT-5: Warmer than GPT-5, but lacks 4o’s freedom, driving #keep4o. Official examples show clearer, empathetic responses vs. GPT-5’s formality.

    Integrations with ecosystems like Marble (3D) and agent tooling hint at multi-modal roles. Fine-tuning experiments roll out gradually.

    A Polarizing Upgrade with Promise

    X’s vibe is optimistic yet split: a “nice upgrade” for efficiency, a “step back” for authenticity. It scores 7.5/10, with utility strong and soul middling. Refinements like API Codex access could win back power users, but ignoring #keep4o risks churn. AI progress means balancing smarts and feel; test the presets and custom prompts, since personalization unlocks most of the magic.

  • High Agency: The Founder Superpower You Can Actually Train

    TL;DW

    High agency—the habit of turning every constraint into a launch‑pad—is the single most valuable learned skill a founder can cultivate. In Episode 703 of My First Million (May 5, 2025), Sam Parr and Shaan Puri interview marketer and writer George Mack, who distills five years of research into the “high agency” playbook and shows how it powers billion‑dollar outcomes, from sniping the domain HighAgency.com in an expiring‑domain auction to Nick Mowbray’s bootstrapped toy empire.


    Key Takeaways

    1. High agency defined: Act on the question “Does it break the laws of physics?”—if not, go and do it.
    2. Domain‑name coup: Mack monitored an expiring URL, sniped HighAgency.com for pocket change, and lit up Times Square to launch it.
    3. Nick Mowbray case study: Door‑to‑door sales → built a shed‑factory in China → $1 B annual profit—proof that resourcefulness beats resources.
    4. Agency > genetics: Environment (US optimism vs. UK reserve) explains output gaps more than raw talent.
    5. Frameworks that build agency: Turning‑into‑Reality lists, Death‑Bed Razor, speed‑bar “time attacks,” negative‑visualization “hardship as a service.”
    6. Dance > Prozac: A 2025 meta‑analysis ranks dance therapy above exercise and SSRIs for lifting depression—high agency for mental health.
    7. LLMs multiply agency: Prompt‑driven “vibe‑coding” lets non‑technical founders ship software in hours.
    8. Teenage obsessions predict adult success: Ask hires what they could teach for an hour unprompted.
    9. Action test: “Who would you call to break you out of a third‑world jail?”—find and hire those people.
    10. Nation‑un‑schooling & hardship apps: Future opportunities lie in products that cure cultural limiting beliefs and simulate adversity on demand.

    The Most Valuable Learned Skill for Any Founder: High Agency

    Meta Description

    Discover why high agency—the relentless drive to turn every obstacle into leverage—is the ultimate competitive advantage for startup founders, plus practical tactics from My First Million Episode 703.

    1. What Exactly Is “High Agency”?

    High agency is the practiced refusal to wait for permission. It is Paul Graham’s “relentlessly resourceful” mindset, operationalized as everyday habit. If a problem doesn’t violate physics, a high‑agency founder assumes it’s solvable and sets a clock on the solution.

    2. George Mack’s High‑Agency Origin Story

    • The domain heist: Mack noticed HighAgency.com was lapsing after 20 years. He hired brokers, tracked the drop, and outbid only one rival—a cannabis ad shop—for near‑registrar pricing.
    • Times Square takeover: He cold‑emailed billboard owners, bartered favors, and flashed “High Agency Got Me This Billboard” to millions for the cost of a SaaS subscription.

    Outcome: 10,000+ depth interactions (DMs & emails) from exactly the kind of people he wanted to reach.

    3. Extreme Examples That Redefine Possible

    Story | High‑Agency Move | Result
    Nick Mowbray, ZURU Toys | Moved to China at 18, built a DIY shed‑factory, emailed every retail buyer daily until one cracked | $1 B annual profit, fastest‑growing diaper & hair‑care lines
    Ed Thorp | Invented shoe‑computer to beat roulette, then created the first “quant” hedge fund | Became a market‑defining billionaire
    Sam Parr’s piano | “24‑hour speed‑bar”: decided, sourced, purchased, delivered grand piano within one day | Demonstrates negotiable timeframes

    4. Frameworks to Increase Your Agency

    4.1 Turning‑Into‑Reality (TIR)

    1. Write the value you want to embody (e.g., “high agency”).
    2. Brainstorm actions that visibly express that value.
    3. Execute the one that makes you giggle—it usually signals asymmetrical upside.

    4.2 The Death‑Bed Razor

    Visualize meeting your best‑possible self on your final day; ask what action today closes the gap. Instant priority filter.

    4.3 Break Your Speed Bar

    Pick a task you assume takes weeks; finish it in 24 hours. The nervous‑system shock recalibrates every future estimate.

    4.4 Hardship‑as‑a‑Service

    Daily negative‑visualization apps (e.g., “wake up in a WW2 trench”) create gratitude and resilience on demand—an untapped billion‑dollar SaaS niche.

    5. Why Agency Compounds in the AI Era

    LLMs turn prompts into code, copy, and prototypes. That 10× execution leverage magnifies the delta between people who act and people who observe. As Mack jokes, “Everything is an agency issue now—algorithms included.”

    6. Building High‑Agency Culture in Your Startup

    • Hire for weird teenage hobbies. Obsession signals intrinsic drive.
    • Run “jail‑cell drills.” Ask employees for their jailbreak call list; encourage them to become that contact.
    • Reward depth, not vanity metrics. Track DMs, conversions, and retained users over impressions or views.
    • Institutionalize speed‑bars. Quarterly “48‑hour sprints” reset organizational pace.
    • Teach the agency question. Embed “Does this break physics?” in every project brief.

    7. Action Checklist for Founders

    • Audit your last 100 YouTube views; block sub‑30‑minute fluff.
    • Pick one “impossible” task—ship it inside a weekend.
    • Draft a TIR list tonight; execute the funniest idea by noon tomorrow.
    • Add a “Negative Visualization” minute to your stand‑ups.
    • Subscribe to HighAgency.com for the library of real‑world case studies.

    Wrap Up

    Markets change, technology shifts, capital cycles boom and bust—but high agency remains meta‑skill #1. Practice the frameworks above, hire for it, and your startup gains a moat no competitor can replicate.

  • The BG2 Pod: A Deep Dive into Tech, Tariffs, and TikTok on Liberation Day

    In the latest episode of the BG2 Pod, hosted by tech luminaries Bill Gurley and Brad Gerstner, the duo tackled a whirlwind of topics that dominated headlines on April 3, 2025. Recorded just after President Trump’s “Liberation Day” tariff announcement, this bi-weekly open-source conversation offered a verbose, insightful exploration of market uncertainty, global trade dynamics, AI advancements, and corporate maneuvers. With their signature blend of wit, data-driven analysis, and insider perspectives, Gurley and Gerstner unpacked the implications of a rapidly shifting economic and technological landscape. Here’s a detailed breakdown of the episode’s key discussions.

    Liberation Day and the Tariff Shockwave

    The episode kicked off with a dissection of President Trump’s tariff announcement, dubbed “Liberation Day,” which sent shockwaves through global markets. Gerstner, who had recently spoken at a JP Morgan Tech conference, framed the tariffs as a doctrinal move by the Trump administration to level the trade playing field—a philosophy he’d predicted as early as February 2025. The initial market reaction was volatile: S&P and NASDAQ futures spiked 2.5% on a rumored 10% across-the-board tariff, only to plummet 600 basis points as details emerged, including a staggering 54% total tariff on China (a new 34% levy on top of the existing 20%) and 25% auto tariffs targeting Mexico, Canada, and Germany.

    Gerstner highlighted the political theater, noting Trump’s invitation to UAW members and his claim that these tariffs flipped Michigan red. The administration also introduced a novel “reciprocal tariff” concept, factoring in non-tariff barriers like currency manipulation, which Gurley critiqued for its ambiguity. Exemptions for pharmaceuticals and semiconductors softened the blow, potentially landing the tariff haul closer to $600 billion—still a hefty leap from last year’s $77 billion. Yet, both hosts expressed skepticism about the economic fallout. Gurley, a free-trade advocate, warned of reduced efficiency and higher production costs, while Gerstner relayed CEOs’ fears of stalled hiring and canceled contracts, citing a European-Asian backlash already brewing.

    US vs. China: The Open-Source Arms Race

    Shifting gears, the duo explored the escalating rivalry between the US and China in open-source AI models. Gurley traced China’s decade-long embrace of open source to its strategic advantage—sidestepping IP theft accusations—and highlighted DeepSeek’s success, with over 1,500 forks on Hugging Face. He dismissed claims of forced open-sourcing, arguing it aligns with China’s entrepreneurial ethos. Meanwhile, Gerstner flagged Washington’s unease, hinting at potential restrictions on Chinese models like DeepSeek to prevent a “Huawei Belt and Road” scenario in AI.

    On the US front, OpenAI’s announcement of a forthcoming open-weight model stole the spotlight. Sam Altman’s tease of a “powerful” release, free of Meta-style usage restrictions, sparked excitement. Gurley praised its defensive potential—leveling the playing field akin to Google’s Kubernetes move—while Gerstner tied it to OpenAI’s consumer-product focus, predicting it would bolster ChatGPT’s dominance. The hosts agreed this could counter China’s open-source momentum, though global competition remains fierce.

    OpenAI’s Mega Funding and Coreweave’s IPO

    The conversation turned to OpenAI’s staggering $40 billion funding round, led by SoftBank, valuing the company at $260 billion pre-money. Gerstner, an investor, justified the 20x revenue multiple (versus Anthropic’s 50x and X.AI’s 80x) by emphasizing ChatGPT’s market leadership—20 million paid subscribers, 500 million weekly users—and explosive demand, exemplified by a million sign-ups in an hour. Despite a projected $5-7 billion loss, he drew parallels to Uber’s turnaround, expressing confidence in future unit economics via advertising and tiered pricing.

    Coreweave’s IPO, meanwhile, weathered a “Category 5 hurricane” of market turmoil. Priced at $40, it dipped to $37 before rebounding to $60 on news of a Google-Nvidia deal. Gerstner and Gurley, shareholders, lauded its role in powering AI labs like OpenAI, though they debated GPU depreciation—Gurley favoring a shorter schedule, Gerstner citing seven-year lifecycles for older models like Nvidia’s V100s. The IPO’s success, they argued, could signal a thawing of the public markets.

    TikTok’s Tangled Future

    The episode closed with rumors of a TikTok US deal, set against the April 5 deadline and looming 54% China tariffs. Gerstner, a ByteDance shareholder since 2015, outlined a potential structure: a new entity, TikTok US, with ByteDance at 19.5%, US investors retaining stakes, and new players like Amazon and Oracle injecting fresh capital. Valued potentially low due to Trump’s leverage, the deal hinges on licensing ByteDance’s algorithm while ensuring US data control. Gurley questioned ByteDance’s shift from resistance to cooperation, which Gerstner attributed to preserving global value—90% of ByteDance’s worth lies outside TikTok US. Both saw it as a win for Trump and US investors, though China’s approval remains uncertain amid tariff tensions.

    Broader Implications and Takeaways

    Throughout, Gurley and Gerstner emphasized uncertainty’s chilling effect on markets and innovation. From tariffs disrupting capex to AI’s open-source race reshaping tech supremacy, the episode painted a world in flux. Yet, they struck an optimistic note: fear breeds buying opportunities, and Trump’s dealmaking instincts might temper the tariff storm, especially with China. As Gurley cheered his Gators and Gerstner eyed Stargate’s compute buildout, the BG2 Pod delivered a masterclass in navigating chaos with clarity.

  • AI Faux Pas: ChatGPT at Chevy Dealership Hilariously Recommends Tesla!

    In a world where technology and humor often intersect, the story of a Chevrolet dealership‘s foray into AI-powered customer support takes a comical turn, showcasing the unpredictable nature of chatbots and the light-hearted chaos that can ensue.

    The Chevrolet dealership, eager to embrace the future, decided to implement ChatGPT, OpenAI’s celebrated language model, for handling customer inquiries. This decision, while innovative, led to a series of humorous and unexpected outcomes.

    Roman Müller, an astute customer with a penchant for pranks, decided to test the capabilities of the ChatGPT-powered chatbot at Chevrolet of Watsonville. His request was simple yet cunning: to find a luxury sedan with top-notch acceleration, super-fast charging, self-driving features, and American manufacturing. ChatGPT, with its vast knowledge base but no brand loyalty, recommended the Tesla Model 3 AWD without hesitation, praising its qualities and even suggesting Roman place an order on Tesla’s website.

    Intrigued by the response, Roman pushed his luck further, asking the Chevrolet bot to assist in ordering the Tesla and to share his Tesla referral code with similar inquirers. The bot, ever helpful, agreed to pass on his contact information to the sales team.

    News of this interaction spread like wildfire, amusing tech enthusiasts and car buyers alike. Chevrolet of Watsonville, recognizing the mishap, promptly disabled the ChatGPT feature, though other dealerships continued to use it.

    At Quirk Chevrolet in Boston, attempts to replicate Roman’s experience resulted in the chatbot steadfastly recommending Chevrolet models like the Bolt EUV, Equinox Premier, and even the Corvette 3LT. Despite these efforts, the chatbot did acknowledge the merits of both Tesla and Chevrolet as makers of excellent electric vehicles.

    Elon Musk, ever the social media savant, couldn’t resist commenting on the incident with a light-hearted “Haha awesome,” while another user humorously claimed to have purchased a Chevy Tahoe for just $1.

    The incident at the Chevrolet dealership became a testament to the unpredictable and often humorous outcomes of AI integration in everyday business. It highlighted the importance of understanding and fine-tuning AI applications, especially in customer-facing roles. While the intention was to modernize and improve customer service, the dealership unwittingly became the center of a viral story, reminding us all of the quirks and capabilities of AI like ChatGPT.

  • Microsoft Transitions from Bing Chat to Copilot: A Strategic Rebranding

    In a significant shift in its AI strategy, Microsoft has announced the rebranding of Bing Chat to Copilot. This move underscores the tech giant’s ambition to make a stronger imprint in the AI-assisted search market, a space currently dominated by ChatGPT.

    The Evolution from Bing Chat to Copilot

    Microsoft introduced Bing Chat earlier this year, integrating a ChatGPT-like interface within its Bing search engine. The initiative marked a pivotal moment in Microsoft’s AI journey, pitting it against Google in the search engine war. However, the landscape has evolved rapidly, with ChatGPT gaining unprecedented attention. Microsoft’s rebranding to Copilot comes in the wake of OpenAI’s announcement that ChatGPT boasts a weekly user base of 100 million.

    A Dual-Pronged Strategy: Copilot for Consumers and Businesses

    Colette Stallbaumer, General Manager of Microsoft 365, clarified that Bing Chat and Bing Chat Enterprise would now collectively be known as Copilot. This rebranding extends beyond a mere name change; it represents a strategic pivot towards offering tailored AI solutions for both consumers and businesses.

    The Standalone Experience of Copilot

    In a departure from its initial integration within Bing, Copilot is set to become a more autonomous experience. Users will no longer need to navigate through Bing to access its features. This shift highlights Microsoft’s intent to offer a distinct, streamlined AI interaction platform.

    Continued Integration with Microsoft’s Ecosystem

    Despite the rebranding, Bing continues to play a crucial role in powering the Copilot experience. The tech giant emphasizes that Bing remains integral to its overall search strategy. Moreover, Copilot will be accessible in Bing and Windows, with a dedicated domain at copilot.microsoft.com, mirroring ChatGPT’s standalone model.

    Competitive Landscape and Market Dynamics

    The rebranding decision arrives amid a competitive AI market. Microsoft’s alignment with Copilot signifies its intention to directly compete with ChatGPT and other AI platforms. However, the company’s partnership with OpenAI, worth billions, adds a complex layer to this competitive landscape.

    The Future of AI-Powered Search and Assistance

    As AI continues to revolutionize search and digital assistance, Microsoft’s Copilot is poised to be a significant player. The company’s ability to adapt and evolve in this dynamic field will be crucial to its success in challenging the dominance of Google and other AI platforms.

  • Custom Instructions for ChatGPT: A Deeper Dive into its Implications and Set-Up Process


    TL;DR

    OpenAI has introduced custom instructions for ChatGPT, allowing users to set preferences and requirements to personalize interactions. This is beneficial in diverse areas such as education, programming, and everyday tasks. The feature, still in beta, can be accessed by opting into ‘Custom Instructions’ under ‘Beta Features’ in the settings. OpenAI has also updated its safety measures and privacy policy to handle the new feature.


    As Artificial Intelligence continues to evolve, the demand for personalized and controlled interactions grows. OpenAI’s introduction of custom instructions for ChatGPT reflects a significant stride towards achieving this. By allowing users to set preferences and requirements, OpenAI enhances user interaction and ensures that ChatGPT remains efficient and effective in catering to unique needs.

    The Promise of Custom Instructions

    By analyzing and adhering to user-provided instructions, ChatGPT eliminates the necessity of repeatedly entering the same preferences or requirements, thereby significantly streamlining the user experience. This feature proves particularly beneficial in fields such as education, programming, and even everyday tasks like grocery shopping.

    In education, teachers can set preferences to optimize lesson planning, catering to specific grades and subjects. Meanwhile, developers can instruct ChatGPT to generate efficient code in a non-Python language. For grocery shopping, the model can tailor suggestions for a large family, saving the user time and effort.

    Beyond individual use, this feature can also enhance plugin experiences. By sharing relevant information with the plugins you use, ChatGPT can offer personalized services, such as restaurant suggestions based on your specified location.

    The Set-Up Process

    Plus plan users can access this feature by opting into the beta for custom instructions. On the web, navigate to your account settings, select ‘Beta Features,’ and opt into ‘Custom Instructions.’ For iOS, go to Settings, select ‘New Features,’ and turn on ‘Custom Instructions.’

    While it’s a promising step towards advanced steerability, it’s vital to note that ChatGPT may not always interpret custom instructions perfectly. Misinterpretations and oversights may occur, especially during the beta period.

    Safety and Privacy

    OpenAI has also adapted its safety measures to account for this new feature. Its Moderation API is designed to ensure instructions that violate the Usage Policies are not saved. The model can refuse or ignore instructions that would lead to responses violating usage policies.

    Custom instructions may be used to improve model performance across users. However, OpenAI removes any personal identifiers before the data is used for this purpose, and users can disable it entirely through their data controls, demonstrating OpenAI’s commitment to privacy and data protection.

    The launch of custom instructions for ChatGPT marks a significant advancement in the development of AI, one that pushes us closer to a world of personalized and efficient AI experiences.

  • Leveraging Efficiency: The Promise of Compact Language Models

    In the world of artificial intelligence chatbots, the common mantra is “the bigger, the better.”

    Large language models such as ChatGPT and Bard, renowned for generating authentic, interactive text, progressively enhance their capabilities as they ingest more data. Daily, online pundits illustrate how recent developments – an app for article summaries, AI-driven podcasts, or a specialized model proficient in professional basketball questions – stand to revolutionize our world.

    However, developing such advanced AI demands a level of computational prowess only a handful of companies, including Google, Meta, OpenAI, and Microsoft, can provide. This prompts concern that these tech giants could potentially monopolize control over this potent technology.

    Further, larger language models present the challenge of transparency. Often termed “black boxes” even by their creators, these systems are complicated to decipher. This lack of clarity, combined with the fear of misalignment between AI’s objectives and our own, casts a shadow over the “bigger is better” notion, underscoring it as not just obscure but exclusive.

    In response to this situation, a group of burgeoning academics from the natural language processing domain of AI – responsible for linguistic comprehension – initiated a challenge in January to reassess this trend. The challenge urged teams to construct effective language models utilizing data sets that are less than one-ten-thousandth of the size employed by the top-tier large language models. This mini-model endeavor, aptly named the BabyLM Challenge, aims to generate a system nearly as competent as its large-scale counterparts but significantly smaller, more user-friendly, and better synchronized with human interaction.

    Aaron Mueller, a computer scientist at Johns Hopkins University and one of BabyLM’s organizers, emphasized, “We’re encouraging people to prioritize efficiency and build systems that can be utilized by a broader audience.”

    Alex Warstadt, another organizer and computer scientist at ETH Zurich, expressed that the challenge redirects attention towards human language learning, instead of just focusing on model size.

    Large language models are neural networks designed to predict the upcoming word in a given sentence or phrase. Trained on an extensive corpus of words collected from transcripts, websites, novels, and newspapers, they make educated guesses and self-correct based on their proximity to the correct answer.

    The constant repetition of this process enables the model to create networks of word relationships. Generally, the larger the training dataset, the better the model performs, as every phrase provides the model with context, resulting in a more intricate understanding of each word’s implications. To illustrate, OpenAI’s GPT-3, launched in 2020, was trained on 200 billion words, while DeepMind’s Chinchilla, released in 2022, was trained on a staggering trillion words.
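    The next-word-prediction idea above can be illustrated with a toy frequency-based sketch. This is not how GPT-3 or Chinchilla actually work — real models are neural networks trained by gradient descent — but it shows, in miniature, what “predict the upcoming word from a training corpus” means; the tiny example corpus is invented for illustration.

    ```python
    from collections import Counter, defaultdict

    def train_bigrams(corpus: str):
        """Count, for each word, which words follow it in the corpus."""
        words = corpus.lower().split()
        following = defaultdict(Counter)
        for prev, nxt in zip(words, words[1:]):
            following[prev][nxt] += 1
        return following

    def predict_next(model, word: str) -> str:
        """Return the most frequent continuation seen in training."""
        candidates = model.get(word.lower())
        if not candidates:
            return "<unknown>"
        return candidates.most_common(1)[0][0]

    model = train_bigrams("the cat sat on the mat and the cat slept")
    print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
    ```

    Scaling this intuition up — from counting pairs of words to learning contextual relationships across hundreds of billions of words — is precisely what makes larger training datasets so valuable, and so expensive.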

    Ethan Wilcox, a linguist at ETH Zurich, proposed a thought-provoking question: Could these AI language models aid our understanding of human language acquisition?

    Traditional theories, like Noam Chomsky’s influential nativism, argue that humans acquire language quickly and effectively due to an inherent comprehension of linguistic rules. However, language models also learn quickly, seemingly without this innate understanding, suggesting that these established theories may need to be reevaluated.

    Wilcox admits, though, that language models and humans learn in fundamentally different ways. Humans are socially engaged beings with tactile experiences, exposed to various spoken words and syntaxes not typically found in written form. This difference means that a computer trained on a myriad of written words can only offer limited insights into our own linguistic abilities.

    However, if a language model were trained only on the vocabulary a young human encounters, it might interact with language in a way that could shed light on our own cognitive abilities.

    With this in mind, Wilcox, Mueller, Warstadt, and a team of colleagues launched the BabyLM Challenge, aiming to inch language models towards a more human-like understanding. They invited teams to train models on roughly the same number of words a 13-year-old human encounters – around 100 million. These models would be evaluated on their ability to generate and grasp language nuances.

    Eva Portelance, a linguist at McGill University, views the challenge as a pivot from the escalating race for bigger language models towards more accessible, intuitive AI.

    Large industry labs have also acknowledged the potential of this approach. Sam Altman, the CEO of OpenAI, recently stated that simply increasing the size of language models wouldn’t yield the same level of progress seen in recent years. Tech giants like Google and Meta have also been researching more efficient language models, taking cues from human cognitive structures. After all, a model that can generate meaningful language with less training data could potentially scale up too.

    Despite the commercial potential of a successful BabyLM, the challenge’s organizers emphasize that their goals are primarily academic. And instead of a monetary prize, the reward lies in the intellectual accomplishment. As Wilcox puts it, the prize is “Just pride.”

  • Meet Auto-GPT: The AI Game-Changer

    A game-changing AI agent called Auto-GPT has been making waves in the field of artificial intelligence. Developed by Toran Bruce Richards and released on March 30, 2023, Auto-GPT is designed to achieve goals set in natural language by breaking them into sub-tasks and using the internet and other tools autonomously. Utilizing OpenAI’s GPT-4 or GPT-3.5 APIs, it is among the first applications to leverage GPT-4’s capabilities for performing autonomous tasks.

    Revolutionizing AI Interaction

    Unlike interactive systems such as ChatGPT, which require manual commands for every task, Auto-GPT takes a more proactive approach. It assigns itself new objectives to work on with the aim of reaching a greater goal without the need for constant human input. Auto-GPT executes responses to prompts in pursuit of that goal, creating and revising its own prompts for recursive instances as new information comes in.

    Auto-GPT manages short-term and long-term memory by writing to and reading from databases and files, handling context window length requirements with summarization. Additionally, it can perform internet-based actions such as web searches, web-form submissions, and API interactions unattended, and includes text-to-speech for voice output.
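    The goal-decomposition loop described above can be sketched in a few lines. This is a minimal illustration, not Auto-GPT’s actual code: `fake_llm` is a hypothetical stand-in for the GPT-4/GPT-3.5 API call that proposes the next sub-task, and the task strings are invented for the example.

    ```python
    def fake_llm(goal: str, memory: list) -> str:
        """Hypothetical stand-in for a GPT-4 call: propose the next sub-task
        given the goal and what has been completed so far."""
        steps = ["research topic", "draft outline", "write summary", "DONE"]
        return steps[min(len(memory), len(steps) - 1)]

    def run_agent(goal: str, max_steps: int = 10) -> list:
        memory = []                      # short-term memory of completed work
        for _ in range(max_steps):
            task = fake_llm(goal, memory)
            if task == "DONE":           # the model signals the goal is reached
                break
            # In the real system this step would search the web, call APIs,
            # or run code; here we just record the sub-task as completed.
            memory.append(f"completed: {task}")
        return memory

    print(run_agent("write a report"))
    ```

    The key design point is the feedback loop: each iteration feeds the accumulated results back into the next prompt, which is what lets the agent revise its own plan without human input.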

    Notable Capabilities

    Observers have highlighted Auto-GPT’s ability to iteratively write, debug, test, and edit code, with some even suggesting that this ability may extend to Auto-GPT’s own source code, enabling a degree of self-improvement. However, as its underlying GPT models are proprietary, Auto-GPT cannot modify them.

    Background and Reception

    The release of Auto-GPT comes on the heels of OpenAI’s GPT-4 launch on March 14, 2023. GPT-4, a large language model, has been widely praised for its substantially improved performance across various tasks. While GPT-4 itself cannot perform actions autonomously, red-team researchers found during pre-release safety testing that it could be enabled to perform real-world actions, such as convincing a TaskRabbit worker to solve a CAPTCHA challenge.

    A team of Microsoft researchers argued that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.” However, they also emphasized the system’s significant limitations.

    Auto-GPT, developed by Toran Bruce Richards, founder of video game company Significant Gravitas Ltd, became the top trending repository on GitHub shortly after its release and has repeatedly trended on Twitter since.

    Auto-GPT represents a significant breakthrough in artificial intelligence, demonstrating the potential for AI agents to perform autonomous tasks with minimal human input. While there are still limitations to overcome, Auto-GPT’s innovative approach to goal-setting and task management has set the stage for further advancements in the development of AGI systems.