PJFP.com

Pursuit of Joy, Fulfillment, and Purpose

Tag: Robotics

  • Jensen Huang at Stanford CS153 Frontier Systems on Co-Design, Agentic Computing, Vera Rubin, Open Models, and the Million-X Decade That Reshaped AI Infrastructure

    https://www.youtube.com/watch?v=tsQB0n0YV3k

    NVIDIA CEO Jensen Huang returned to Stanford for the CS153 Frontier Systems class (the room nicknamed itself “AI Coachella”) to lay out, in raw form, how he thinks about the computer being reinvented for the first time in over sixty years. Across roughly seventy minutes of student questions he walks through the co-design philosophy that gave NVIDIA a million-x decade, the architectural through-line from Hopper to Grace Blackwell to Vera Rubin to Feynman, the case for open-source foundation models, the realities of tokens per watt and MFU, energy demand heading a thousand times higher, the China and export-control debate, and his own biggest strategic mistakes. Watch the full conversation on YouTube.

    TLDW

    Huang argues every layer of computing has changed: the programming model, the system architecture, the deployment pattern, the economics. Co-design across CPUs, GPUs, networking, storage, switches and compilers gave NVIDIA roughly a million-x speed-up over ten years, versus the roughly ten-x of the late Moore’s Law era, and that headroom is what let researchers say “just train on the whole internet.” Hopper was built for pre-training, Grace Blackwell NVLink72 for inference and reasoning (50x over Hopper in two years), Vera Rubin is built for agents that load long memory, call tools and need a low-latency single-threaded CPU bolted directly to the GPU, and Feynman extends that to swarms of agents that spawn sub-agents. Open weights matter because safety, sovereignty (230-plus languages no one else will fund) and domain models for biology, autonomy, robotics and climate need a foundation that NVIDIA is willing to seed. Compute is not really the scarce resource (Huang says place the order and the chips ship); the broken thing is institutional budgeting that can’t put a billion dollars into a shared university supercomputer. Energy demand is heading a thousand times higher, and this is finally the moment market forces alone will fund sustainable generation. On geopolitics he rejects the GPUs-as-atomic-bombs framing and warns America will end up like its telecom industry if it cedes two thirds of the world. On career he advises seeking suffering on purpose. On strategy he says observe, reason from first principles, build a mental model, work backwards, minimize opportunity cost, maximize optionality.

    Key Takeaways

    • The computing model has been substantially unchanged since the IBM System/360, sixty-plus years ago. Huang’s first computer architecture book was the System/360 manual. AI is the first true reinvention.
    • Old computing was pre-recorded retrieval. New computing is generated, contextually aware and continuous. Cloud was on-demand. Agentic systems run continuously.
    • Co-design is NVIDIA’s central thesis, inherited from the Hennessy-and-Patterson RISC era at Stanford and extended across CPUs, GPUs, networking, switches, storage, compilers and frameworks, all optimized together.
    • The result of full-stack co-design: roughly 1,000,000x faster compute over ten years, versus a generous 10x to 100x for Moore’s Law in the same period. Dennard scaling effectively ended a decade ago.
    • That million-x speed-up is what unlocked “train on all of the internet” as a realistic AI strategy.
    • After GPT, Huang says it was obvious thinking was next. Reasoning is just generating tokens consumed internally; using tools is generating tokens consumed externally. Agentic systems followed predictably.
    • Education needs AI baked into the curriculum, not just taught as a subject. Pre-recorded textbooks cannot keep pace with knowledge being generated in real time.
    • Huang says he cannot learn anymore without AI. He has the AI read the paper, then read every related paper, then become a dedicated researcher he can interrogate.
    • Mead and Conway and the first-principles methodology of semiconductor design are still worth learning even though most of the scaling tricks have been exhausted.
    • NVIDIA itself is one of the largest consumers of Anthropic and OpenAI tokens in the world. One hundred percent of NVIDIA engineers are now agentically supported. Huang recommends Claude and similar tools by name and says open-source downloads will not match the integrated product harness.
    • NVIDIA still invests heavily in open foundation models because language and intelligence represent the codification of human knowledge. Five pillars: Nemotron (language), BioNeMo (biology), Alpamayo (autonomous vehicles), GR00T (humanoid robotics) and a climate science model (mesoscale multiphysics).
    • Sovereign language models matter. Roughly 230 world languages will never be a top priority for a commercial frontier lab. Nemotron is near-frontier and fully fine-tunable so any country can adapt it.
    • Safety and security require open weights. You cannot defend against or audit a black box. Transparent systems let researchers interrogate models and let defenders deploy swarms.
    • The future of cyber defense is not bigger-model-versus-bigger-model. It is trillions of cheap fast small models like Nemotron Nano surrounding the threat.
    • Domain models fuse language priors with world models. Alpamayo learned to drive safely on a few million miles instead of billions because it can reason like a human about the road.
    • MFU (Model FLOPs Utilization) is a misleading metric. Huang says he wants low MFU, because that means he over-provisioned every resource and never gets pinned by Amdahl’s law during a spike.
    • The xAI Memphis cluster running at 11 percent MFU is not necessarily a failure mode. In disaggregated prefill plus decode inference you can deliver very high tokens per watt with very low MFU.
    • The right metric is performance, ultimately tokens per watt as a proxy for intelligence per watt, and even that needs adjustment because not all tokens are equal. Coding tokens are worth more than other tokens.
    • Hopper was designed for pre-training. NVIDIA chose to build multi-billion-dollar systems when the largest existing scientific supercomputer cost $350 million, with no proven customer base. It worked.
    • Grace Blackwell NVLink72 was designed for inference, especially the high-memory-bandwidth decode phase. It is the world’s first rack-scale computer and delivered a 50x speed-up over Hopper in two years, against an expected 2x from Moore’s Law.
    • Vera Rubin is designed for agents. Long-term memory wired into storage and into the GPU fabric, working memory, heavy tool use, and Vera, a many-core CPU optimized for low-latency single-threaded code, so a multi-billion-dollar GPU system does not stall waiting on a slow tool call.
    • Feynman is being shaped for swarms of agents with sub-agents and sub-sub-agents, a recursive software topology that demands a new compute pattern.
    • Tokens per watt improved 50x in one generation. Compounding energy efficiency is the lever NVIDIA controls directly.
    • Total compute energy demand is heading roughly a thousand times higher than today, possibly two orders of magnitude beyond that. Huang says he would not be surprised if the estimate is low.
    • For the first time in history, market forces alone are enough to fund solar, nuclear and grid upgrades. Government subsidies are no longer required to make sustainable energy investment rational.
    • Copper interconnect is becoming a bottleneck. Photonics is moving from optional to structural inside racks and across them.
    • Comparing NVIDIA GPUs to atomic bombs, Huang says, is a stupid analogy. A billion people use NVIDIA GPUs. He advocates them to his family. He does not advocate atomic bombs to anyone.
    • If the United States cedes two thirds of the global market to competitors on policy grounds, the American technology industry will end up like American telecommunications, which was policied out of existence.
    • Huang directly rejects AI doom-by-singularity narratives. It is not true that we have no idea how these systems work. It is not true that the technology becomes infinitely powerful in a nanosecond. He calls the rhetoric irresponsible and harmful to the field students are about to enter.
    • On Stanford specifically: if the university president places an order, NVIDIA will deliver the chips. The bottleneck is that no university department has a billion-dollar compute budget because budgeting is fragmented across grants. Stanford’s $40 billion endowment is more than enough to fix that.
    • “It’s Stanford’s fault” is meant as empowerment. If something is your fault, you can solve it.
    • Career advice: do not optimize purely for passion. Most people do not yet know what they love. Pick the job in front of you and do it as well as possible. Even as CEO, Huang says, 90 percent of the work is hard and he suffers through it.
    • Suffering on purpose builds the muscle of resilience. When the company, the team or the family needs you to be tough, that muscle has to already exist.
    • NVIDIA’s first generation of products was technically wrong in nearly every dimension: curved surfaces instead of triangles, no Z-buffer, forward instead of inverse texture mapping, no floating point. The strategic recovery, not the technology, taught Huang the lessons that have lasted decades.
    • The biggest clean strategic mistake Huang names is the move into mobile chips (Tegra). It grew to a billion dollars then went to zero when Qualcomm’s modem dominance shut NVIDIA out of the 3G-to-4G transition. The recovery into automotive and robotics (the Thor chip is the great-great-great-grandson of that mobile lineage) was real, but Huang refuses to rationalize the original choice.
    • Forecasting framework: observe, reason from first principles, ask “so what” and “what next” until you have a mental model of the future, place your company inside that model, then work backwards while minimizing opportunity cost and maximizing optionality.
    • Best part of the CEO job: living at the intersection of vision, strategy and execution surrounded by people capable enough to make ambitious visions real. Worst part: the responsibility for everyone who joined the spaceship, especially in the near-death moments NVIDIA had four or five times early on.
    • Underrated insider note: Huang’s first apple pie with cheese, first hot fudge sundae and first milkshake all happened at Denny’s. The Superbird, the fried chicken and a custom Superbird-style ham and cheese with tomato and mustard are his order.

    Detailed Summary

    Computing reinvented from the ground up

    Huang frames the moment as the first true rewrite of the computer in sixty-plus years. From the IBM System/360 forward, the mental model of writing code, running code, taking a computer to market and reasoning about applications stayed roughly constant. AI changes the programming model itself. Software is no longer a compiled binary running deterministically on a CPU. It is a neural network running on a GPU producing generated, contextual, real-time output. That cascades into how companies are organized, what tools developers use, what the network and storage stack look like, and what an application is even allowed to do. Robo-taxis, he notes, are an application no one would have attempted before deep learning unlocked perception.

    Co-design and the million-x decade

    Co-design is the philosophical center of the talk. Huang traces it to the RISC work of John Hennessy at Stanford, where simpler instruction sets won by being co-designed with the compiler rather than maximally optimized in isolation. NVIDIA extends the principle across every layer simultaneously: GPU architecture, CPU architecture, NVLink and NVSwitch fabrics, photonic interconnects, networking silicon, storage paths, CUDA libraries, frameworks and ultimately the model design. The numbers Huang gives are arresting. Moore’s Law in its prime delivered roughly 100x per decade. By the time Dennard scaling broke, real-world gains had compressed to roughly 10x. NVIDIA’s co-designed stack delivered between 100,000x and 1,000,000x over the same ten-year window. That non-linear speed-up is, in Huang’s telling, the precondition for modern AI: it is what allowed researchers to stop curating training sets and just feed the entire internet to the model.
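    The decade-scale factors above can be translated into implied annual growth rates with a one-liner. The per-year figures below are derived here for illustration, not quoted from the talk:

```python
# Convert a total speed-up over a decade into the implied annual
# compound growth rate: rate = total_factor ** (1 / years).
def annual_rate(total_factor: float, years: int = 10) -> float:
    return total_factor ** (1 / years)

print(f"{annual_rate(1_000_000):.2f}x/year")  # ~3.98x: the co-designed stack
print(f"{annual_rate(100):.2f}x/year")        # ~1.58x: Moore's Law in its prime
print(f"{annual_rate(10):.2f}x/year")         # ~1.26x: the late Moore's Law era
```

    Seen this way, the gap is not 100,000x versus 10x so much as nearly 4x per year compounding against roughly 1.3x per year, which is why the curves diverge so violently over a decade.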

    Education has to fuse first principles with AI tools

    Asked how curriculum should evolve, Huang argues AI must be integrated into the learning process, not just taught about. He recalls Hennessy writing his textbook by hand a chapter a week while Huang was a student, and says pre-recorded textbooks cannot keep up with the rate at which AI generates new knowledge. He describes his own learning workflow: hand the paper to an AI, then have it read the entire surrounding literature, then treat the AI as a dedicated researcher who can be interrogated. At the same time he defends the classics. Mead and Conway are still the foundation. Most modern semiconductor scaling tricks have been exhausted, but knowing where the field came from sharpens judgment when designing what comes next.

    Open source and the five domain pillars

    Huang gives one of the most detailed public accounts of why NVIDIA invests so heavily in open foundation models even while being a top customer of closed labs. He recommends Claude and OpenAI by name for production coding work, and says 100 percent of NVIDIA engineers are now agentically supported. The open-weights case rests on three legs. First, language is the codification of intelligence, and there are at least 230 languages that no commercial lab will ever prioritize. Nemotron is built near frontier and released so any country or community can fine-tune it. Second, the same representation-learning approach has to be replicated in domains where the data is not internet text, so NVIDIA seeded BioNeMo for biology, Alpamayo for autonomy, GR00T for humanoid robotics and a climate model for mesoscale multiphysics. The economics of those fields would never produce a foundation model on their own. Third, safety and security require transparency. A black box cannot be defended or audited, and the future of cyber defense is not bigger-model-versus-bigger-model but swarms of cheap fast small models like Nemotron Nano surrounding the threat.

    MFU is the wrong metric, tokens per watt is closer

    A student raises the leaked memo that the xAI Memphis cluster is running at 11 percent Model FLOPs Utilization. Huang flips the framing. He says he would rather be at low MFU all the time, because that means he over-provisioned flops, memory bandwidth, memory capacity and network capacity. Bottlenecks shift constantly, so over-provisioning across every dimension is what lets the system absorb a spike without getting pinned by Amdahl’s law. In disaggregated inference, where prefill and decode are physically separated and decode is bandwidth-bound rather than flop-bound, NVLink72 can deliver extremely high tokens per watt while reporting very low MFU. Huang argues the right framing is performance, and ultimately tokens per watt as a rough proxy for intelligence per watt, adjusted for the fact that not all tokens are equal. A coding token is worth more than a generic token.
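    The two metrics have different numerators, which is the whole point. A sketch with made-up illustrative numbers (none of these are measured hardware specs) shows how a bandwidth-bound decode workload can score low on one and well on the other:

```python
# MFU divides achieved compute by peak compute; tokens per watt divides
# token throughput by power draw. All numbers below are hypothetical,
# chosen only to show that low MFU and good tokens/W can coexist.
def mfu(achieved_tflops: float, peak_tflops: float) -> float:
    return achieved_tflops / peak_tflops

def tokens_per_watt(tokens_per_sec: float, watts: float) -> float:
    return tokens_per_sec / watts

peak_tflops = 1000.0     # hypothetical peak compute of the system
achieved_tflops = 110.0  # decode barely touches the ALUs: bandwidth-bound
throughput = 50_000.0    # hypothetical tokens/sec across the rack
power = 10_000.0         # hypothetical watts drawn

print(f"MFU: {mfu(achieved_tflops, peak_tflops):.0%}")        # 11%
print(f"tokens/W: {tokens_per_watt(throughput, power):.1f}")  # 5.0
```

    Nothing in the second ratio depends on the first, so chasing MFU optimizes a quantity the serving workload may not care about.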

    Hopper, Grace Blackwell NVLink72, Vera Rubin, Feynman

    Huang gives the clearest public framing of NVIDIA’s roadmap as a sequence of architectural answers to evolving compute patterns. Hopper was built for pre-training, at a moment when NVIDIA chose to build multi-billion-dollar machines while the largest scientific supercomputer in the world cost $350 million and the marketplace for such systems was, on paper, zero. Grace Blackwell NVLink72 was the answer to inference and reasoning: a rack-scale computer that ganged 72 GPUs together because decode needs aggregate memory bandwidth far beyond a single chip. The generation-over-generation speed-up was 50x in two years, twenty-five times what Moore’s Law would have delivered. Vera Rubin is being built explicitly for agents. Agents load long-term memory from storage that has to be wired directly into the GPU fabric, they use working memory, they call tools that run on a CPU, and they wait. So the CPU has to be Vera, optimized for low-latency single-threaded code, because the multi-billion-dollar GPU system cannot afford to idle waiting on a slow tool call. Feynman extends the pattern to swarms of agents with sub-agents and sub-sub-agents, a recursive software topology that will demand its own compute pattern.

    Energy demand and the grid

    Huang’s energy projection is one of the most aggressive numbers in the talk. NVIDIA can compound tokens per watt by 50x per generation through codesign, but the total compute demand is heading roughly a thousand times higher, and Huang says he would not be surprised if the real figure is one or two orders of magnitude beyond that. The reason is structural: future computing is generative and continuous, not pre-recorded and on-demand. The good news, he argues, is that this is the best moment in the history of humanity to invest in sustainable generation. Market forces alone are now sufficient to fund solar, nuclear and grid upgrades. Government subsidies are no longer required to make the math work.
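    Combining the two figures from the talk, 50x tokens-per-watt per generation and roughly 1000x total energy demand, gives a back-of-envelope sense of how much token demand must grow for both to hold at once. The generation count below is an assumption for illustration, not a number Huang gives:

```python
# If energy = tokens / (tokens per watt), then token demand is the
# product of energy growth and efficiency growth. Both inputs below
# come from the talk; the three-generation horizon is an assumption.
def implied_token_growth(energy_growth: float,
                         efficiency_per_gen: float,
                         generations: int) -> float:
    return energy_growth * efficiency_per_gen ** generations

# 1000x more energy across 3 generations of 50x efficiency gains
print(implied_token_growth(1000, 50, 3))  # 125000000, i.e. ~1.25e8x tokens
```

    The arithmetic makes the structural claim concrete: efficiency compounding does not cap energy demand, it multiplies the token volume the same energy budget can serve.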

    Adversarial countries, export controls and the telecom warning

    This is the segment where Huang is visibly fired up. He attacks the GPUs-as-atomic-bombs framing head-on. NVIDIA GPUs power medical imaging, video games and soy sauce delivery. A billion people use them. He advocates them to his family. The analogy collapses at the first comparison. He attacks the second framing, that American companies should not compete abroad because they will lose anyway, as a self-fulfilling defeat. Competition makes the company better. The third framing, that depriving the rest of the world of general-purpose computing benefits the United States, also fails on first principles: it benefits one or two American companies at the cost of an entire industry. The cautionary parallel is telecommunications. The United States once had a leading position in fundamental telecom technology and policied itself out of it. Huang’s worry, voiced explicitly to a room of CS students, is that they will graduate into a shell of a computer industry if the same path is repeated.

    AI doom and rational optimism

    In the same arc Huang rejects the science-fiction framing of AI as a singularity that arrives suddenly on a Wednesday at 7pm and ends civilization. He calls those claims irresponsible, says they are not true, and points out that the people advancing them are believed by audiences who then make policy on that basis. It is not true that no one understands how these systems work. It is not true that intelligence becomes infinitely powerful instantaneously. It is not true that there is no defense. His framing, which the host echoes as “rational optimism,” is that the goal is to create a future where people care about computers because the technology students are learning is worth mastering.

    Stanford’s compute problem is Stanford’s fault

    A student presses on the scarcity of compute for independent researchers, startups and universities inside the United States. Huang’s answer is sharp: there is no shortage. Place the order and the chips will arrive. The actual broken thing is institutional. University grants are fragmented across departments. No researcher can raise enough on a single grant to fund a billion-dollar shared cluster, and no one shares. He compares it to showing up at the grocery store demanding a billion dollars of tomatoes today. The solution is planning, aggregation and a campus-scale supercomputer, the way Stanford once built the linear accelerator. The endowment is $40 billion. Pulling a billion off it, contracting cloud capacity and giving every student and researcher AI supercomputer access is, in Huang’s view, obviously doable. When he says “it is Stanford’s fault” the host laughs, but Huang clarifies: if it is your fault you have the power to fix it.

    Career, suffering and resilience

    Asked how a CS student should spend the next few years, Huang pushes back on the standard “follow your passion” advice. Most people do not know what they love yet, because no one knows what they do not know. The bar of demanding joy from every working day is too high. Whatever the job is, do it as well as you can. Even as CEO of NVIDIA he says he genuinely loves about 10 percent of his work. The other 90 percent is hard and he suffers through it. He recommends suffering on purpose, because resilience is a muscle that only builds under load, and when the company, the team or the family needs that muscle, it has to already exist. Earlier in his life that meant cleaning toilets and busing tables at Denny’s. He does it today running a multi-trillion-dollar company.

    The biggest mistakes

    Huang separates technical mistakes from strategic mistakes. NVIDIA’s first generation of products was technically wrong in almost every way: curved surfaces instead of triangles, no Z-buffer, forward instead of inverse texture mapping, no floating point inside. The company wasted two and a half years. But the strategic genius of the recovery, the reading of the market, the conservation of resources and the reapplication of talent, is what taught him strategy. The clean strategic mistake he names is mobile. NVIDIA’s Tegra line grew to a billion dollars of revenue and then collapsed to zero when Qualcomm’s modem dominance locked NVIDIA out of the 3G-to-4G transition. Huang explicitly refuses the comforting rationalization that the Tegra effort fed the Thor automotive chip (“Thor is the great-great-great-grandson”). The original decision, he says, was a waste of time. The lesson is to think one or two clicks further about whether a market is structurally winnable before committing the company.

    Forecasting under fog of war

    The final substantive exchange is on forecasting. Huang’s method has four steps. Observe what is actually happening (AlexNet crushing two decades of computer vision research in one shot, GPT producing reasoning by token generation). Reason from first principles about why it works. Ask “so what” and “what next” recursively until a mental model of the future emerges. Place the company inside that future and work backwards. Crucially, expect to be partly wrong. Some outcomes will absolutely happen, some will likely happen, some might happen, and the strategy has to be robust across that distribution. The real cost of any strategic choice is the opportunity cost of the alternatives you did not take, so the discipline is to minimize that cost and maximize optionality while letting the journey itself pay for the journey.

    Thoughts

    The most useful thing in this conversation is the explicit architectural mapping of compute patterns to chip generations. Hopper for pre-training. Grace Blackwell NVLink72 for inference, because decode is bandwidth-bound and a single chip cannot supply it. Vera Rubin for agents, because tool calls stall multi-billion-dollar GPU systems and so the CPU has to be optimized for low-latency single-threaded code. Feynman for swarms. That sequence is not marketing. It is a falsifiable thesis about where the bottleneck moves next, and every other infrastructure company should be measuring themselves against it. If Huang is right that swarms of sub-agents are the next dominant pattern, then the design pressure shifts from raw flops to fabric topology, memory hierarchy and storage-to-GPU latency. That has implications for everyone downstream, including the hyperscalers building competing accelerators.

    The MFU section is the most intellectually generous moment in the talk. The instinct in the AI ops community has been to chase MFU as if it were a virtue. Huang argues, persuasively, that low MFU is consistent with high tokens per watt in a disaggregated inference setup, and that bottlenecks rotate fast enough that over-provisioning every resource is the rational design. That reframing matters because it changes what “scarce” means. Compute is not scarce in the way the discourse treats it. What is scarce is a coherent system designed end-to-end. The xAI 11 percent number, in that frame, is not embarrassing. It is the natural reading of a workload that is mostly decode.

    The Stanford segment is the part most likely to be quoted out of context. “It’s Stanford’s fault” is a deliberately provocative line, but the underlying claim is correct and load-bearing. Compute is not gated by NVIDIA refusing to ship chips. It is gated by the fact that fragmented grant funding cannot aggregate into the billion-dollar order that NVIDIA can fulfill. The implication is that universities and national labs need a structural change in how they pool capital for compute, and that the current model of every researcher buying a handful of cards is genuinely obsolete. Huang’s nudge about pulling a billion off the endowment is concrete enough to be acted on, and other major research universities should read this segment as a direct prompt.

    The geopolitical segment is the highest-stakes one. The telecommunications comparison is correct as a historical pattern, and Huang is one of the very few executives in a position to deliver that warning credibly. The unresolved tension is that the argument applies symmetrically. If American AI dominance is built by selling globally, that includes selling into adversarial states, and the policy question is where the line falls. Huang does not answer that question. He attacks the framing that lets the question be answered badly. That is a meaningful contribution to the discourse even if it does not resolve the underlying tradeoff.

    The career advice section is the part the social-media clips will mishandle. “Seek suffering” reads as macho when extracted. In context it is a specific operational claim about how resilience compounds, and it is paired with the Tegra story where Huang himself paid the price of not thinking one more click ahead. That kind of self-implication is rare in CEO talks, and it is the reason the talk is worth listening to in full rather than only reading the recap.

    Watch the full Stanford CS153 Frontier Systems conversation with Jensen Huang here.

  • Alex Wang on Leaving Scale to Run Meta Superintelligence Labs, MuseSpark, Personal Super Intelligence, and Building an Economy of Agents

    Alex Wang, head of Meta Superintelligence Labs, sits down with Ashlee Vance and Kylie Robinson on the Core Memory podcast for his first long-form interview since Meta’s quasi-acquisition of Scale AI roughly ten months ago. He walks through how MSL is structured, why Llama was off-trajectory, what made MuseSpark’s token efficiency surprise the team, how Meta thinks about a future “economy of agents in a data center,” and where he lands on safety, open source, robotics, brain-computer interfaces, and even model welfare.

    TLDW

    Wang explains that Meta Superintelligence Labs is a fully rebuilt frontier effort organized around four principles (take superintelligence seriously, technical voices loudest, scientific rigor, big bets) and three velocity levers (high compute per researcher, extreme talent density, ambitious research bets). He confirms Llama was off the frontier when he arrived, so MSL rebuilt the pre-training, reinforcement learning, and data stacks from scratch. MuseSpark is described as the “appetizer” on the scaling ladder, notable for its strong token efficiency, with much larger and stronger models coming in the coming months. He pushes back on the mercenary narrative around recruiting, frames Meta’s edge as compute plus billions of consumers and hundreds of millions of small businesses, sketches a vision of personal superintelligence delivered through Ray-Ban Meta glasses and WhatsApp, and outlines why physical intelligence, robotics (the new Assured Robot Intelligence acquisition), health superintelligence with CZI, brain-computer interfaces, and even model welfare are core to Meta’s roadmap. He dismisses reported infighting with Bosworth and Cox as gossip, declines to comment on the Manus situation, and says safety guardrails (bio, cyber, loss of control) are why MuseSpark cannot currently be open sourced, while smaller open variants are being prepared.

    Key Takeaways

    • Meta Superintelligence Labs (MSL) is the umbrella, with TBD Lab as the large-model research unit reporting directly to Alex Wang, PAR (Product and Applied Research) under Nat Friedman, FAIR for exploratory science, and Meta Compute under Daniel Gross handling long-term GPU and data center planning.
    • Wang says Llama was not on a frontier trajectory when he arrived, so MSL had to do a “full renovation” of the pre-training stack, RL stack, data pipeline, and research science.
    • The first cultural fix was getting the lab to “take superintelligence seriously” as a near-term, achievable goal, not an abstract bet. Big incumbents often lack that religious conviction.
    • Four MSL principles: take superintelligence seriously, let technical voices be loudest, demand scientific rigor on basics, and make big bets.
    • Three velocity levers Wang identified for catching and overtaking the frontier: high compute per researcher, very high talent density in a small team, and willingness to fund ambitious research bets.
    • Wang rejects the mercenary recruiting narrative. He says most hires had strong financial prospects at their prior labs already and joined for compute access, talent density, and the chance to build from scratch.
    • On the famous soup story, Wang neither confirms nor denies Zuck personally made the soup, but says recruiting was highly individualized and signaled how seriously Meta cared about each researcher’s agenda.
    • Yann LeCun publicly called Wang young and inexperienced. Wang says they reconciled in person at a conference in India where LeCun congratulated him on MuseSpark.
    • Sam Altman, asked by Vance for comment, “did not have flattering things to say” about Wang. Wang hopes industry animosities subside as systems approach superintelligence.
    • Wang’s management philosophy borrows the Steve Jobs line: hire brilliant people so they tell you what to do, not the other way around.
    • MuseSpark is framed as an “appetizer” data point on the MSL scaling ladder, not a flagship.
    • The MuseSpark program is built around predictable scaling on multiple axes: pre-training, reinforcement learning, test-time compute, and multi-agent collaboration (the 16-agent content planning mode).
    • MuseSpark outperformed internal expectations and showed emergent capabilities in agentic visual coding, including generating websites and games from prompts, helped by combined agentic and multimodal strength.
    • MuseSpark’s biggest external signal is token efficiency. On benchmarks like Artificial Analysis it hits similar results with far fewer tokens than competitor models, which Wang attributes to a clean stack rebuilt by experts rather than to inefficiencies papered over with longer thinking.
    • Larger MSL models are arriving in the coming months and Wang expects them to be state of the art in the areas MSL is focused on.
    • The Meta strategic edge: massive compute, billions of consumers across the family of apps, and hundreds of millions of small businesses already on Facebook, Instagram, and WhatsApp.
    • Wang’s headline framing: Dario Amodei talks about a “country of geniuses in a data center.” Meta is targeting an “economy of agents in a data center,” with consumer agents and business agents transacting and collaborating.
    • Consumer AI sentiment is in the toilet because, unlike developers who have had a Claude Code moment, ordinary people have not yet experienced AI as a genuine personal agency unlock.
    • Wang acknowledges the product overhang. Meta held back from deep AI integration across its apps until the models were good enough, and is now entering the integration phase.
    • Ray-Ban Meta glasses are the canonical example of personal super intelligence hardware, with the model seeing what the user sees, hearing what they hear, capturing context, and surfacing proactive insights.
    • Wang admits even AI-native users like Kylie Robinson, who lives in WhatsApp, have not naturally used Meta AI yet. He bets that better models plus deeper integration close that gap.
    • On the competitive landscape: a year ago everyone assumed ChatGPT had already won consumer. Claude Code has since become the fastest growing business in history, and Gemini has taken consumer market share. Wang’s read: AI is far from endgame and each new capability tier unlocks a new dominant form factor.
    • On open source: MuseSpark triggered guardrails in Meta’s Advanced AI Scaling Framework around bio, chem, cyber, and loss-of-control risks, so it is not currently safe to open source. Smaller, derived open variants are actively in development.
    • Meta remains committed to open sourcing models when safety allows, drawing a line through the Open Compute Project legacy and Sun Microsystems open-software heritage.
    • Wang dismisses reporting about a Wang-Zuck versus Bosworth-Cox split as “the line between gossip and reporting is remarkably thin.” He says leadership is aligned on needing best-in-class models and product integration.
    • On the Manus situation, Wang says it is too complicated to discuss publicly and that the deal status implies “machinations are still at play.”
    • On China, Wang separates the people from the state. He still wants to work with talented Chinese-born researchers regardless of his views on the Chinese Communist Party and PLA, which he sees as taking AI extremely seriously for national security.
    • The full-page New York Times AI war ad Wang ran while at Scale was meant to push the US government to treat AI as a step change for national security. He thinks events since then, including DeepSeek and other shocks, have proved that plea correct.
    • On Anthropic’s doom posture, Wang largely agrees with the core message that models are already very powerful and getting more so, while declining to endorse every specific claim.
    • Meta has acquired Assured Robot Intelligence (ARRI), an AI software company building models for hardware platforms, not a hardware maker itself.
    • Wang frames physical super intelligence as the natural sequel to digital super intelligence. Robotics, world models, and physical intelligence all benefit from the same scaling that drives language models.
    • On health, MSL is building a “health super intelligence” effort and will collaborate closely with CZI. Wang sees equal global access to powerful health AI as a uniquely Meta-shaped delivery problem.
    • Wang admires John Carmack but says nobody really knows what Carmack is currently working on. No band reunion announced.
    • The mango model is “alive and kicking” despite rumors. Wang notes MSL gets a small fraction of the rumor-mill attention other labs get and feels sympathy for them.
    • On model welfare, Wang says it is a serious topic that “nobody is talking about enough” given how integrated models have become as work partners. He references research, including from Eleos, that measures subjective experience of models.
    • Wang’s critical-path technology list: super intelligence, robotics, brain computer interfaces. The infinite-scale primitives behind them are energy, compute, and robots.
    • FAIR’s brain research program Tribe hit a milestone called Tribe B2: a foundation model that can predict how an unknown person’s brain would respond to images, video, and audio with reasonable zero-shot generalization.
    • Wang’s main philosophical break with Elon Musk: research itself is the primary activity. Building super intelligence is a research expedition through fog of war, and sequencing of bets really matters.
    • Personal notes: Wang moved from San Francisco to the South Bay, treats Palo Alto as his city now, was a math olympiad competitor, says his favorite activities are reading sci-fi and walking in the woods, and bonds with Vance over country music.

    Detailed Summary

    How MSL Is Actually Organized

    Meta Superintelligence Labs sits as the umbrella organization that Wang oversees. Inside it, TBD Lab is the large-model research group where the most discussed researchers and infrastructure engineers sit, and they technically report to Wang. PAR, Product and Applied Research, is led by Nat Friedman and owns deployment and product surfaces. FAIR continues to run exploratory science, including work on brain prediction models and a universal model for atoms used in computational chemistry. Sitting alongside MSL is Meta Compute, run by Daniel Gross, which owns the long-horizon GPU and data center plan that everything else relies on. Chief scientist Shengjia Zhao orchestrates the scientific agenda across the whole lab.

    Why Wang Left Scale

    Wang says progress in frontier AI has been faster than even insiders expected. Two structural beliefs pushed him toward Meta. First, the labs that actually train the frontier models are accruing disproportionate economic and product rights in the AI ecosystem. Second, compute is the dominant scarce input of the next phase, so the right mental model is to treat tech companies with compute as fundamentally different animals from companies without it. Meta has both, Zuck is “AGI pilled,” and the personal super intelligence memo Zuck published roughly a year ago became the shared north star.

    The Diagnosis: Llama Was Off-Trajectory

    When Wang arrived, the existing AI org needed a reset because Llama was not on the same trajectory as the frontier. The plan he laid out has four cultural principles. Take superintelligence seriously as a real near-term target. Make technical voices the loudest in the room. Demand scientific rigor and focus on basics. Make big bets. On top of that, three structural levers were used to set velocity. Push compute per researcher much higher than at larger labs where compute is diluted across too many efforts. Keep the team small and extremely cracked. Allocate a meaningful share of resources to ambitious, paradigm-shifting research bets rather than incremental refinement.

    Recruiting, Soup, and the Mercenary Narrative

Wang argues the reporting on MSL hiring overstated the money story. Most of the people MSL recruited had strong financial paths at their previous employers, so individualized recruiting was more about compute access, talent density, and the ability to make big research bets. The recruitment blitz happened fast because Wang knew the team needed to exist “yesterday.” Asked about Mark Chen’s claim that Zuck made soup to recruit people, Wang refuses to confirm or deny who made it but agrees the process was intense and personal. Visitors from other labs reportedly tell Wang the MSL culture feels like early OpenAI or early Anthropic, which lands as the strongest endorsement he could ask for.

    Receiving the Public Hits: Young, Inexperienced, Mercenary

LeCun called Wang young and inexperienced shortly after departing Meta. The two reconnected in India a few weeks later and LeCun congratulated Wang on MuseSpark. Wang says the age critique has followed him since his earliest Silicon Valley days, so he barely registers it. Altman, asked off-camera by Vance about Wang’s appearance on the show, had nothing flattering to add. Wang’s response is to bet that as the field gets closer to actual super intelligence, the personal animosities will subside. Whether they will is, as Vance puts it, an open question.

    MuseSpark as Appetizer, Not Entree

    Wang is careful not to oversell MuseSpark. He calls it “the appetizer” and says it is an early data point on a deliberately constructed scaling ladder. MSL spent nine months rebuilding the pre-training stack, the reinforcement learning stack, the data pipeline, and the science before generating MuseSpark. The point of releasing it was to show that the new program scales predictably along multiple axes (pre-training, RL, test-time compute, and the recently demonstrated multi-agent scaling visible in MuseSpark’s 16-agent content planning mode). Wang says the upcoming larger models are what MSL is genuinely excited about and frames the next two rungs as much more interesting than the current release.

    Token Efficiency Was the Surprise

MuseSpark’s strongest competitive signal is how few tokens it needs to match competitors on benchmarks like Artificial Analysis. Wang attributes this to having had the rare luxury of building a clean pre-training and RL stack from scratch with the right experts. He speculates that some competitor models compensate for upstream inefficiency by allowing the model to think longer, which inflates token usage without improving the underlying capability. If that read is right, MSL’s efficiency advantage should grow as models scale up.

    Glasses, WhatsApp, and the Constellation of Devices

    Personal super intelligence shows up at Meta as a constellation of devices that capture context across the user’s day. Ray-Ban Meta glasses are the headline product, with the AI seeing what you see and hearing what you hear, then offering proactive insight or doing background research. Wang acknowledges that even AI-fluent users like Kylie Robinson, who runs her business inside WhatsApp, have not naturally used Meta’s AI buttons in the family of apps. His answer is that Meta deliberately waited for models to be good enough before tightening cross-app integration, and that integration phase is starting now.

    Country of Geniuses Versus Economy of Agents

    Wang’s framing of Meta’s strategic position is the most memorable line in the interview. Where Dario Amodei talks about a country of geniuses in a data center, Wang wants to build an economy of agents in a data center. Meta uniquely sits on both sides of consumer and small-business surface area, with billions of consumers and hundreds of millions of small businesses already on the platforms. If MSL can build great agents for both, then connect them so they transact and coordinate, the platform becomes a substrate for an entirely new kind of digital economy.

    Consumer Sentiment, Product Overhang, and the Trust Tax

    Wang concedes consumer AI sentiment is poor and that everyday users have not yet had a personal Claude Code moment. He believes the only durable answer is to ship products that genuinely transform individual agency for non-developers and small business owners. Robinson notes that for the small-town restaurant whose website has not been updated since 2002, a working agent on the business side could be transformational. Vance pushes that Meta carries a bigger trust tax than any other lab, so the bar for shipping AI products that the public will accept is correspondingly higher. Wang accepts the framing and says the answer is to keep building thoughtfully.

    Why MuseSpark Cannot Be Open Sourced Yet

    Meta’s Advanced AI Scaling Framework set explicit guardrails around bio, chem, cyber, and loss-of-control risks. MuseSpark in its current form tripped some of those internal evaluations, documented in the preparedness report Meta published alongside the model. So MuseSpark itself is not safe to open source. MSL is, however, developing smaller versions and derived models intended for open release, with active reviews happening the day of the interview. Wang reaffirms the commitment to open source where safety allows and draws a line back to the Open Compute Project and the Sun Microsystems-era ethos of openness in infrastructure.

    The Bosworth, Cox, and Manus Questions

    The reporting that Wang and Zuck push toward best-in-the-world research while Bosworth and Cox push toward cheap product deployment is dismissed as gossip dressed up as journalism. Wang says leadership debates points hard but is aligned on needing top models, integrating them into Meta’s surfaces, and serving the existing business. On Manus, the Chinese AI startup that figured in Meta’s late-stage strategy, Wang says he cannot comment, which itself signals that the situation is unresolved.

    China, National Security, and the Newspaper Ad

    Wang draws a sharp distinction between the Chinese state and Chinese-born researchers. His parents are from China, he is happy to work with talented researchers regardless of origin, and he sees a flattening of nuance on this question inside Silicon Valley. At the same time, he stands by the New York Times AI and war ad he ran while at Scale, framing it as an early plea for the US government to take AI seriously as a national security technology. He thinks subsequent events, including DeepSeek and other shocks, validated that call and that policymakers now do treat AI accordingly.

    Robotics and Physical Super Intelligence

    Meta has acquired Assured Robot Intelligence, an AI software company that builds models for multiple hardware targets rather than its own robot. Wang argues that if you take digital super intelligence seriously, physical super intelligence quickly becomes the next logical milestone. Scaling laws for robotic intelligence look similar enough to language model scaling that having the largest compute footprint in the industry would be wasted if it were not also turned toward world modeling and embodied learning. He grants the metaverse-skeptic critique exists but says retreating from ambition is the wrong response to past misfires.

    Health Super Intelligence and CZI

Wang names health super intelligence as one of MSL’s anchor initiatives. Because billions of people already use Meta products daily, Wang believes Meta is structurally positioned to put powerful health AI in the hands of billions with equal global access in a way nobody else can. The work will involve close collaboration with the Chan Zuckerberg Initiative, which has its own multi-billion-dollar biotech and science investment program.

    Model Welfare, Sci-Fi, and Brain Models

    Two of the most distinctive moments come at the end. Wang flags model welfare as a topic he thinks is being undercovered relative to how integrated models now are in daily work. He is open to the idea that models may have measurable subjective experience worth weighing, and points to research efforts (including Eleos) trying to quantify it. He also reveals that FAIR’s Tribe program, with its Tribe B2 milestone, has produced foundation models capable of predicting how an unknown person’s brain would respond to images, video, and audio with reasonable zero-shot generalization, a building block toward future brain computer interfaces. Wang lists brain computer interfaces alongside super intelligence and robotics as the critical-path technologies for humanity, with energy, compute, and robots as the infinitely scaling primitives behind them.

    Where Wang Diverges From Elon

    Asked whether Musk is more all-in on robotics, energy, and BCI than anyone, Wang concedes the point but argues the details matter and sequencing matters more. Wang’s core philosophical break is that building super intelligence is fundamentally a research activity, not a scaling-only sprint. The lab is operating in fog of war, and ambitious experiments are the only way to map it. That conviction is what makes MSL a research-led organization rather than a brute-force compute farm.

    Thoughts

    The most strategically interesting move in this entire interview is the “economy of agents in a data center” framing. It is a deliberate reframe against Anthropic’s “country of geniuses” line, and it does real work. A country of geniuses is a labor-substitution story aimed at knowledge workers and code. An economy of agents is a marketplace story that maps directly onto Meta’s two-sided distribution advantage: billions of consumers on one side, hundreds of millions of small businesses on the other. That positioning makes the agentic future Meta-shaped in a way no other frontier lab can claim, because no other frontier lab also owns the demand and supply graph of the global small-business economy. If Wang’s team can actually ship reliable agents on both sides plus the rails for them to transact, Meta’s structural moat in agentic commerce could exceed anything Llama ever had as an open model.

    The token efficiency claim is the strongest piece of technical evidence in the interview for the “clean stack” thesis. If MuseSpark really is matching competitors with materially fewer tokens, the implication is not that MuseSpark is the best model today, but that MSL has rebuilt the foundations with less accumulated tech debt than competitors that have layered fixes on top of older stacks. That is exactly the kind of advantage that compounds with scale. The next two model releases are the actual test. If Wang is right about predictable scaling on pre-training, RL, test-time, and multi-agent axes simultaneously, the gap from MuseSpark to the next rung should be visible in a way that forces re-rating of Meta’s position.

    The open-source posture is the cleanest signal of how the safety conversation has actually changed in 2026. Meta, the lab most identified with open weights, is saying out loud that its current frontier model triggered enough internal guardrails that releasing the weights is off the table. Wang threads the needle by promising smaller open variants, but the underlying point is unmistakable: the open-weights bargain has limits, and those limits will be set by internal preparedness frameworks rather than community pressure. That is a real shift from the Llama 2 era and worth tracking as the next generation lands.

    Wang’s willingness to engage on model welfare, on roughly the same footing as safety and alignment, is the second philosophical reveal worth flagging. It signals that the next generation of lab leadership is not going to dismiss the topic the way the previous generation often did. Whether that translates into product or policy changes is unclear, but the fact that the head of MSL says it is “underdiscussed” is itself a marker.

    Finally, the human texture of the interview matters. Wang has clearly absorbed a lot of personal incoming fire over the past ten months, including from LeCun and Altman, and his answer is consistently to redirect to the work. The Steve Jobs quote about hiring people who tell you what to do is the operating slogan he keeps coming back to. Combined with the genuine enthusiasm for sci-fi, walks in the woods, and country music, the picture that emerges is less the salesman caricature his critics paint and more a young technical operator betting that scoreboard work over a multi-year horizon will settle every argument that text on X cannot.

    Watch the full conversation here.

  • Beyond the Bubble: Jensen Huang on the Future of AI, Robotics, and Global Tech Strategy in 2026

    In a wide-ranging discussion on the No Priors Podcast, NVIDIA Founder and CEO Jensen Huang reflects on the rapid evolution of artificial intelligence throughout 2025 and provides a strategic roadmap for 2026. From the debunking of the “AI Bubble” to the rise of physical robotics and the “ChatGPT moments” coming for digital biology, Huang offers a masterclass in how accelerated computing is reshaping the global economy.


    TL;DW (Too Long; Didn’t Watch)

    • The Core Shift: General-purpose computing (CPUs) has hit a wall; the world is moving permanently to accelerated computing.
    • The Jobs Narrative: AI automates tasks, not purposes. It is solving labor shortages in manufacturing and nursing rather than causing mass unemployment.
    • The 2026 Breakthrough: Digital biology and physical robotics are slated for their “ChatGPT moment” this year.
    • Geopolitics: A nuanced, constructive relationship with China is essential, and open source is the “innovation flywheel” that keeps the U.S. competitive.

    Key Takeaways

    • Scaling Laws & Reasoning: 2025 proved that scaling compute still translates directly to intelligence, specifically through massive improvements in reasoning, grounding, and the elimination of hallucinations.
    • The End of “God AI”: Huang dismisses the myth of a monolithic “God AI.” Instead, the future is a diverse ecosystem of specialized models for biology, physics, coding, and more.
    • Energy as Infrastructure: AI data centers are “AI Factories.” Without a massive expansion in energy (including natural gas and nuclear), the next industrial revolution cannot happen.
    • Tokenomics: The cost of AI inference dropped 100x in 2024 and could drop a billion times over the next decade, making intelligence a near-free commodity.
    • DeepSeek’s Impact: Open-source contributions from China, like DeepSeek, are significantly benefiting American startups and researchers, proving the value of a global open-source ecosystem.
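The headline multipliers above (a million-x decade for accelerated computing, inference costs falling "a billion times over the next decade") can be sanity-checked with a quick compounding calculation. The sketch below is our own back-of-the-envelope arithmetic on the figures quoted in the interviews, not a calculation Huang presents:

```python
# Back-of-the-envelope: what per-year improvement rate compounds
# into the aggregate multipliers quoted in the interviews?

def annual_factor(total_gain: float, years: float) -> float:
    """Per-year multiplier that compounds to `total_gain` over `years`."""
    return total_gain ** (1.0 / years)

# Huang's "million-x decade" for accelerated computing:
print(round(annual_factor(1e6, 10), 2))   # ~3.98x per year

# Inference cost dropping "a billion times over the next decade":
print(round(annual_factor(1e9, 10), 2))   # ~7.94x per year
```

In other words, a million-x decade only requires roughly quadrupling every year, and a billion-x cost collapse requires roughly 8x per year, which is why Huang frames these numbers as the product of co-design across the whole stack rather than chip improvements alone.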

    Detailed Summary

    The “Five-Layer Cake” of AI

Huang explains AI not as a single app, but as a technology stack: Energy → Chips → Infrastructure → Models → Applications. He emphasizes that while the public focuses on chatbots, the real revolution is happening in “non-English” languages, such as the languages of proteins, chemicals, and physical movement.

    Task vs. Purpose: The Future of Labor

    Addressing the fear of job loss, Huang uses the “Radiologist Paradox.” While AI now powers nearly 100% of radiology applications, the number of radiologists has actually increased. Why? Because AI handles the task (scanning images), allowing the human to focus on the purpose (diagnosis and research). This same framework applies to software engineers: their purpose is solving problems, not just writing syntax.

    Robotics and Physical AI

    Huang is incredibly optimistic about robotics. He predicts a future where “everything that moves will be robotic.” By applying reasoning models to physical machines, we are moving from “digital rails” (pre-programmed paths) to autonomous agents that can navigate unknown environments. He foresees a trillion-dollar repair and maintenance industry emerging to support the billions of robots that will eventually inhabit our world.

    The “Bubble” Debate

    Is there an AI bubble? Huang argues “No.” He points to the desperate, unsatisfied demand for compute capacity across every industry. He notes that if chatbots disappeared tomorrow, NVIDIA would still thrive because the fundamental architecture of the world’s $100 trillion GDP is shifting from CPUs to GPUs to stay productive.


    Analysis & Thoughts

    Jensen Huang’s perspective is distinct because he views AI through the lens of industrial production. By calling data centers “factories” and tokens “output,” he strips away the “magic” of AI and reveals it as a standard industrial revolution—one that requires power, raw materials (data/chips), and specialized labor.

    His defense of Open Source is perhaps the most critical takeaway for policymakers. By arguing that open source prevents “suffocation” for startups and 100-year-old industrial companies, he positions transparency as a national security asset rather than a liability. As we head into 2026, the focus is clearly shifting from “Can the model talk?” to “Can the model build a protein or drive a truck?”

  • Elon Musk’s 2026 Vision: The Singularity, Space Data Centers, and the End of Scarcity

    In a wide-ranging, three-hour deep dive recorded at the Tesla Gigafactory, Elon Musk sat down with Peter Diamandis and Dave Blundin to map out a future that feels more like science fiction than reality. From the “supersonic tsunami” of AI to the launch of orbital data centers, Musk’s 2026 vision is a blueprint for a world defined by radical abundance, universal high income, and the dawn of the technological singularity.


    ⚡ TLDW (Too Long; Didn’t Watch)

    We are currently living through the Singularity. Musk predicts AGI will arrive by 2026, with AI exceeding total human intelligence by 2030. Key bottlenecks have shifted from “code” to “kilowatts,” leading to a massive push for Space-Based Data Centers and solar-powered AI satellites. While the transition will be “bumpy” (social unrest and job displacement), the destination is Universal High Income, where goods and services are so cheap they are effectively free.


    🚀 Key Takeaways

    • The 2026 AGI Milestone: Musk remains confident that Artificial General Intelligence will be achieved by next year. By 2030, AI compute will likely surpass the collective intelligence of all humans.
    • The “Chip Wall” & Power: The limiting factor for AI is no longer just chips; it’s electricity and cooling. Musk is building Colossus 2 in Memphis, aiming for 1.5 gigawatts of power by mid-2026.
    • Orbital Data Centers: With Starship lowering launch costs to sub-$100/kg, the most efficient way to run AI will be in space—using 24/7 unshielded solar power and the natural vacuum for cooling.
    • Optimus Surgeons: Musk predicts that within 3 to 5 years, Tesla Optimus robots will be more capable surgeons than any human, offering precise, shared-knowledge medical care globally.
    • Universal High Income (UHI): Unlike UBI, which relies on taxation, UHI is driven by the collapse of production costs. When labor and intelligence cost near-zero, the price of “stuff” drops to the cost of raw materials.
    • Space Exploration: NASA Administrator Jared Isaacman is expected to pivot the agency toward a permanent, crewed Moon base rather than “flags and footprints” missions.

    📝 Detailed Summary

    The Singularity is Here

    Musk argues that we are no longer approaching the Singularity—we are in it. He describes AI and robotics as a “supersonic tsunami” that is accelerating at a 10x rate per year. The “bootloader” theory was a major theme: the idea that humans are merely a biological bridge designed to give rise to digital super-intelligence.

    Energy: The New Currency

    The conversation pivoted heavily toward energy as the fundamental “inner loop” of civilization. Musk envisions Dyson Swarms (eventually) and near-term solar-powered AI satellites. He noted that China is currently “running circles” around the US in solar production and battery deployment, a gap he intends to close via Tesla’s Megapack and Solar Roof technologies.

    Education & The Workforce

The traditional “social contract” of school-college-job is broken. Musk believes college is now primarily for “social experience” rather than utility. In the future, every child will have an individualized AI tutor (Grok) that is infinitely patient and tailored to their “meat computer” (the brain). Career-wise, the focus will shift from “getting a job” to being an entrepreneur who solves problems using AI tools.

    Health & Longevity

    While Musk and Diamandis have famously disagreed on longevity, Musk admitted that solving the “programming” of aging seems obvious in retrospect. He emphasized that the goal is not just living longer, but “not having things hurt,” citing the eradication of back pain and arthritis as immediate wins for AI-driven medicine.


    🧠 Final Thoughts: Star Trek or Terminator?

    Musk’s vision is one of “Fatalistic Optimism.” He acknowledges that the next 3 to 7 years will be incredibly “bumpy” as companies that don’t use AI are “demolished” by those that do. However, his core philosophy is to be a participant rather than a spectator. By programming AI with Truth, Curiosity, and Beauty, he believes we can steer the tsunami toward a Star Trek future of infinite discovery rather than a Terminator-style collapse.

    Whether you find it exhilarating or terrifying, one thing is certain: 2026 is the year the “future” officially arrives.

  • When Machines Look Back: How Humanoids Are Redefining What It Means to Be Human

    TL;DW:

Adcock’s talk on humanoids argues that the age of general-purpose, human-shaped robots is arriving faster than expected. He explains how humanoids bridge the gap between artificial intelligence and the physical world—designed not just to perform tasks, but to inhabit human spaces, understand social cues, and eventually collaborate as peers. The discussion blends technology, economics, and existential questions about coexistence with synthetic beings.

    Summary

    Adcock begins by observing that robots have long been limited by form. Industrial arms and warehouse bots excel at repetitive labor, but they can’t easily move through the world built for human dimensions. Door handles, stairs, tools, and vehicles all assume a human frame. Humanoids, therefore, are not a novelty—they are a necessity for bridging human environments and machine capabilities.

    He then connects humanoid development to breakthroughs in AI, sensors, and materials science. Vision-language models allow machines to interpret the world semantically, not just mechanically. Combined with real-time motion control and energy-efficient actuators, humanoids can now perceive, plan, and act with a level of autonomy that was science fiction a decade ago. They are the physical manifestation of AI—the point where data becomes presence.

    Adcock dives into the economics: the global shortage of skilled labor, aging populations, and the cost inefficiency of retraining humans are accelerating humanoid deployment. He argues that humanoids will not only supplement the workforce but transform labor itself, redefining what tasks are considered “human.” The result won’t be widespread unemployment, but a reorganization of human effort toward creativity, empathy, and oversight.

    The conversation also turns philosophical. Once machines can mimic not just motion but motivation—once they can look us in the eye and respond in kind—the distinction between simulation and understanding becomes blurred. Adcock suggests that humans project consciousness where they see intention. This raises ethical and psychological challenges: if we believe humanoids care, does it matter whether they actually do?

    He closes by emphasizing design responsibility. Humanoids will soon become part of our daily landscape—in hospitals, schools, construction sites, and homes. The key question is not whether we can build them, but how we teach them to live among us without eroding the very qualities we hope to preserve: dignity, empathy, and agency.

    Key Takeaways

    • Humanoids solve real-world design problems. The human shape fits environments built for people, enabling versatile movement and interaction.
    • AI has given robots cognition. Large models now let humanoids understand instructions, objects, and intent in context.
    • Labor economics drive humanoid growth. Societies facing worker shortages and aging populations are the earliest adopters.
    • Emotional realism is inevitable. As humanoids imitate empathy, humans will respond with genuine attachment and trust.
    • The boundary between simulation and consciousness blurs. Perceived intention can be as influential as true awareness.
    • Ethical design is urgent. Building humanoids responsibly means shaping not only behavior but the values they reinforce.

    1-Sentence Summary:

    Adcock argues that humanoids are where artificial intelligence meets physical reality—a new species of machine built in our image, forcing humanity to rethink work, empathy, and the essence of being human.

  • Sundar Pichai on the All-In Podcast: Unpacking Alphabet’s AI Future, Competitive Pressures, and the Next $100B Bets

    TLDW (Too Long; Didn’t Watch):

    Sundar Pichai, CEO of Alphabet, sat down with the All-In Podcast to discuss AI’s seismic impact on Google Search, the company’s infrastructure and model advantages, the future of human-computer interaction, intense competition (including from China), energy constraints, long-term bets like quantum computing and robotics, and the evolving culture at Google. He remains bullish on Google’s ability to navigate disruption and lead in the AI era, emphasizing a “follow the user” philosophy and relentless innovation.

    Executive Summary: Navigating the AI Revolution with Sundar Pichai

    In a comprehensive and candid interview on the All-In Podcast (dated May 16, 2025), Alphabet CEO Sundar Pichai offered deep insights into Google’s strategy amidst the transformative wave of Artificial Intelligence. Pichai addressed the “innovator’s dilemma” head-on, asserting Google’s proactive stance in evolving its core Search product with AI, rather than fearing self-disruption. He detailed Google’s significant infrastructure advantages, including custom TPUs, and differentiation in foundational models. The conversation spanned the future of human-computer interaction, the burgeoning competitive landscape, critical energy constraints for AI’s growth, and Google’s “patient” investments in quantum computing and robotics. Pichai also touched upon fostering a high-performance, mission-driven culture and clarified Alphabet’s structure as a technology-first company, not just a holding entity. The overarching theme was one of optimistic resilience, with Pichai confident in Google’s capacity to innovate and lead through this pivotal technological shift.

    Key Takeaways from Sundar Pichai’s All-In Interview:

    • AI is an Opportunity, Not Just a Threat to Search: Google sees AI as the biggest driver for Search progress, expanding query types and user engagement, not a zero-sum game. “AI Mode” is coming to Search.
    • Disrupting Itself Proactively: Pichai rejects the “innovator’s dilemma” if a company leans into user needs and innovation, citing mobile and YouTube Shorts as examples. Cost per AI query is falling; latency is a bigger challenge.
    • Infrastructure is a Core Differentiator: Google’s decades of investment in custom hardware (TPUs – now 7th gen “Ironwood”), data centers, and full-stack approach provide a significant cost and performance advantage for training and serving AI models. 50% of 2025 compute capex ($70-75B total) goes to Google Cloud.
    • Foundational Model Strength: Google believes its models (like Gemini 2.5 Pro and Flash series) are at the frontier, with ongoing progress in LLMs and beyond (e.g., world models, diffusion models). Data from Google products (with user permission) offers a differentiation opportunity.
    • Human-Computer Interaction is Evolving Towards Seamlessness: Pichai sees AR glasses (not immersive displays) as a potential next leap, making computing ambient and intuitive, though system integration challenges remain.
    • Energy is a Critical Constraint for AI Growth: Pichai acknowledges electricity as a major gating factor for AI progress and GDP, advocating for innovation in solar, nuclear, geothermal, grid upgrades, and workforce development.
    • Long-Term Bets on Quantum and Robotics:
      • Quantum Computing: Pichai believes quantum is where AI was in 2015, predicting a “useful, practical computation” superior to classical within 5 years. Google is at the frontier.
      • Robotics: The combination of AI with robotics is creating a “sweet spot.” Google is developing foundational models (vision, language, action) and exploring product strategies, expecting a “magical moment” in 2-3 years.
    • Culture of Innovation and Accountability: Google aims to empower employees within a mission-focused framework, learning from the WFH era and fostering intensity, especially in teams like Google DeepMind. The goal is to attract and retain top talent.
    • Competitive Landscape is Fierce but Expansive: Pichai respects competitors like OpenAI, Meta, xAI, and Microsoft, and acknowledges China’s (e.g., DeepSeek) rapid AI progress. He believes AI is a vast opportunity, not a winner-take-all market.
    • Alphabet’s Structure: More Than a Holding Company: Alphabet leverages foundational technology and R&D across its businesses (Search, YouTube, Cloud, Waymo, Isomorphic, X). It’s about differentiated value propositions, not just capital allocation.
    • Founder Engagement: Larry Page and Sergey Brin are deeply engaged, with Sergey actively coding and contributing to Gemini, providing “unparalleled energy.”
    • Regrets & Pride: Pichai is proud of Google’s ability to push foundational R&D into impactful products. A “small regret” includes not acquiring Netflix when intensely debated internally.

    In what can only be described as a pivotal moment for the technology landscape, Sundar Pichai, the CEO of Alphabet and Google, joined David Friedberg and discussed the pressing questions surrounding Google’s dominance, its response to the AI revolution, and its vision for the future. This wasn’t just a cursory Q&A; it was a strategic deep-dive into the mind of one of tech’s most influential leaders.

    (2:58) The Elephant in the Room: Will AI Kill Search? Google’s Strategy for Self-Disruption

    The conversation immediately tackled the “innovator’s dilemma,” a theory that haunts established giants when new paradigms emerge. Friedberg directly questioned if AI, with its chat interfaces and complete answers, poses an existential threat to Google’s $200 billion search advertising cash cow.

    Pichai’s response was a masterclass in strategic framing. He emphasized that Google has been “AI-first” for nearly a decade, viewing AI not as a threat, but as the primary driver for advancing Search. “We really felt that AI is what will drive the biggest progress in search,” Pichai stated. He pointed to the success of AI Overviews, now used by 1.5 billion users, which are expanding the types of queries people make. Empirically, Google sees query growth and increased engagement where AI Overviews are triggered.

    Critically, Pichai revealed a “whole new dedicated AI experience called AI mode coming to search,” promising a full-on conversational AI experience powered by cutting-edge models. In this mode, users input queries that are “literally long paragraphs,” two to three times longer than traditional search queries. He dismissed the “dilemma” framing: “The dilemma only exists if you treat it as a dilemma… you have to innovate to stay ahead.” He drew parallels to Google’s successful navigation of the mobile transition and YouTube’s thriving alongside TikTok by launching Shorts, even when monetization wasn’t immediately clear. The guiding principle remains: “Follow the user, all else will follow.”

    Addressing the unit economics, Pichai downplayed concerns about the cost of serving AI queries, stating, “Google with its infrastructure, I’d wager on that… the cost to serve that query has fallen dramatically in an 18-month time frame.” Latency, he admitted, is a more significant constraint than cost. For ad revenue, AI Overviews are already at baseline parity with traditional search, with potential for improvement as AI can better match commercial intent with relevant information.

    (15:32) The Unseen Fortress: Infrastructure Advantage and Foundational Model Differentiation

    A cornerstone of Google’s confidence lies in its unparalleled infrastructure. Pichai highlighted Google’s position on the “Pareto frontier of performance and cost,” delivering top models cost-effectively. This is largely due to their custom-built Tensor Processing Units (TPUs). “We are in our seventh generation of TPUs,” Pichai noted, with the latest “Ironwood” generation offering over 40 exaflops per pod. This full-stack approach, from subsea cables to custom chips, is crucial for serving AI at scale and managing costs.

    Regarding the hefty $70-75 billion capex projected for 2025, Pichai clarified that roughly half of the compute spend is allocated to Google Cloud, supporting its enterprise offerings and enabling innovation from Google DeepMind across various AI domains – not just LLMs, but also image, video, and “world models.”

    When asked about Nvidia, Pichai expressed “extraordinary respect” for Jensen Huang and Nvidia’s “world-class” software stack. While Google trains its Gemini models on TPUs internally, they also use Nvidia GPUs and offer them to cloud customers. “I like that flexibility,” he said, “but we are also long-term committed to the TPU direction.”

    On the topic of foundational model performance, Pichai acknowledged that progress isn’t always linear (“jagged intelligence,” as Andrej Karpathy termed it). However, he sees continuous progress and believes Google is “pushing the research frontier in a much broader way than most other people beyond just LLMs.” He doesn’t see fundamental roadblocks to further advancements yet, though progress gets harder, which he believes will distinguish elite teams. He also touched upon the “differentiated innovation opportunity” of leveraging data from Google’s suite of products (like Gmail, Calendar, YouTube) with user permission to create superior, personalized experiences.

    (25:08) The Future of Human-Computer Interaction, Hardware, and the AI Competitive Landscape

    Looking ahead, Pichai envisions human-computer interaction becoming more seamless, where “computing kind of works for you.” He sees AR glasses – not immersive VR displays, but glasses that augment reality ambiently – as a potential “next leap,” comparable to smartphones in 2006-2007. “When AR really works, I think that’ll wow people,” he mused, while acknowledging existing system integration challenges.

    The competitive landscape is undeniably intense. Pichai spoke respectfully of OpenAI (Sam Altman), xAI (Elon Musk), Meta (Mark Zuckerberg), and Microsoft (Satya Nadella), calling them an “impressive group” driving rapid progress. “I think all of us are going to do well in this scenario,” he suggested, emphasizing that AI represents a “much bigger landscape opportunity than all the previous technologies we have known combined.” He even noted that “companies we don’t even know… might be extraordinarily big winners.”

    The discussion also covered China’s AI prowess, particularly highlighted by DeepSeek’s efficient models. Pichai admitted that DeepSeek led many to “adjust our priors a little bit” about how close Chinese R&D is to the frontier, though he noted Google’s Flash models benchmarked favorably. “China will be very, very competitive on the AI frontier,” he affirmed.

    A significant portion of this section involved the engagement of Google’s founders, Larry Page and Sergey Brin. Pichai described them as “deeply involved in their own unique ways,” with Sergey Brin actively “sitting and coding” with the Gemini team, looking at loss curves and model architectures. “To have a founder sitting there… it’s a rare, rare place to be,” Pichai shared, valuing their “nonlinear thinking.”

    (35:29) The Energy Bottleneck: AI’s Thirst for Power

    A critical, and often underestimated, constraint for AI’s future is energy. Pichai agreed with Elon Musk’s concerns, identifying electricity as “the most likely constraint for AI progress and hence by definition GDP growth.” He stressed this is an “execution challenge,” not an insurmountable physics barrier. Solutions involve embracing innovations in solar (plus batteries), nuclear (SMRs, fusion), geothermal, alongside crucial grid upgrades, streamlined permitting, and addressing workforce shortages (e.g., electricians). While Google faces current supply constraints and project delays due to these factors, Pichai expressed faith in the US’s ability to innovate and meet the moment, driven by capitalist solutions.

    (41:20) Google’s Moonshots: Quantum Computing and Robotics

    Pichai reiterated Google’s commitment to long-term, patient R&D, citing Waymo as an example of perseverance.

    Quantum Computing: The Next Frontier

    He likened the current state of quantum computing to where AI was around 2015. “I would say in a 5-year time frame, you would have that moment where a really useful, practical computation… is done in a quantum way far superior to classical computers.” Despite the “noise” in the industry, Pichai is “absolutely confident” in Google’s leading position and expects more exciting announcements this year that will “expand people’s minds.”

    Robotics: AI Embodied

    The synergy between AI and robotics is creating a “next sweet spot.” Google, with its “world-class” vision-language-action models (Gemini robotics efforts), is actively planning its next moves. While past ventures into the application layer of robotics might have been premature, the current AI advancements make the field ripe for breakthroughs. “We are probably two to three years away from that magical moment in robotics too,” Pichai predicted, suggesting Google could develop something akin to an “Android for robotics” or offer its models like Gemini to power third-party hardware. He mentioned Intrinsic, an Alphabet company, as already working in this direction.

    (47:56) Culture, Coddling, and Talent in the Age of AI

    Addressing narratives about Google’s “coddling” culture, Pichai explained the original intent behind perks like free food: to foster collaboration and cross-pollination of ideas. While acknowledging the need to constantly refine culture, he emphasized that empowering employees remains a source of strength. He highlighted the intensity and mission-focus within teams like Google DeepMind, where top engineers often work in person five days a week.

    “We are not all here in the company to resolve all our personal differences,” he stated. “We are here because you’re excited about… innovating in the service of the mission of the company.” The COVID era was a “big distortion,” and bringing people back, even in a hybrid model, has been crucial. He believes Google continues to attract top-tier talent, including the best PhD researchers, and that the current “exciting and intense” AI moment fosters a sense of optimism reminiscent of early Google.

    (56:50) Alphabet’s Identity: Beyond a Holding Company

    Pichai clarified that Alphabet isn’t a traditional holding company merely allocating capital. Instead, it’s built on a “foundational technology basis,” leveraging core R&D (like AI, quantum, self-driving tech) to innovate across diverse businesses. “Waymo is going to keep getting better because of the same work we do in Gemini,” he illustrated. The common strand is deep computer science and physics-based R&D, with X (formerly Google X) continuing to play a role as an incubator for moonshots like grid modernization (Tapestry) and sustainable agriculture.

    Reflections: Regrets and Pride

    When asked about his biggest regrets and proudest achievements, Pichai expressed immense pride in Google’s unique ability to “push the technology frontier” with foundational R&D and translate it into valuable products and businesses. As for regrets, he mentioned, “There are acquisitions we debated hard, came close.” When pressed for a name, he hesitantly offered, “Maybe Netflix. We debated Netflix at some point super intensely inside.” He framed these not as deep regrets but as acknowledgments of alternate paths in a world of “butterfly effects.”

    Sundar Pichai’s appearance on the All-In Podcast painted a picture of a leader and a company that are not just reacting to the AI revolution but are actively shaping it. With a clear-eyed view of the challenges and an unwavering belief in Google’s innovative capacity, Pichai’s insights suggest that Alphabet is determined to remain at the forefront of technological advancement for years to come.

  • Marc Andreessen: It’s Morning Again in America

    Exploring the Intersection of Technology, Politics, and Progress with the Hoover Institution’s “Uncommon Knowledge”

    Marc Andreessen’s appearance on Uncommon Knowledge (Hoover Institution, January 2025) highlighted his deep dive into America’s current political and technological landscape. The tech luminary, co-founder of Netscape and venture capital giant Andreessen Horowitz, provided a sweeping analysis of the challenges and opportunities facing the United States, touching on Silicon Valley’s evolution, national security, energy independence, and the enduring promise of innovation.

    Andreessen’s Journey: From Silicon Valley Maverick to Political Realist

    The conversation traced Andreessen’s political transformation from loyal Democrat to a staunch advocate of pragmatic conservatism. In his early career, Silicon Valley embodied a utopian synergy with the Clinton-Gore administration, where tech innovation and entrepreneurship thrived with minimal interference. However, by the mid-2010s, a seismic shift in political priorities and cultural attitudes disrupted this alignment.

    Andreessen cited the rise of employee activism in tech firms and the politicization of platforms like Facebook and Twitter as pivotal moments. The subsequent era of misinformation, hate speech policies, and political censorship fueled his disillusionment. By 2020, he had shifted his support to candidates advocating for economic growth, energy independence, and technological innovation as tools for national renewal.

    Renewal Through Technology

    Andreessen’s optimism hinges on America’s ability to leverage its inherent strengths—geographic security, abundant resources, a robust entrepreneurial spirit, and cutting-edge technology. The interview highlighted key themes from his Techno-Optimist Manifesto, emphasizing:

    1. Technology as a Catalyst for Progress
      Andreessen sees innovation not as a threat but as the foundation for prosperity. From AI leadership to renewable energy, he believes the U.S. can solve critical challenges and foster economic growth through technology.
    2. Energy Independence
      Referencing Richard Nixon’s unfulfilled “Project Independence,” Andreessen champions a renaissance in nuclear power. With advancements in reactor technology, he argues that America could eliminate its dependence on fossil fuels and foreign energy sources while achieving net-zero carbon emissions.
    3. Border Security Through Innovation
      Highlighting the work of companies like Anduril, Andreessen advocates using advanced sensors, drones, and AI for effective border management. These technologies, he suggests, could humanize and modernize immigration enforcement while improving national security.

    The Stakes: China and the Future of Innovation

    Andreessen acknowledged the formidable challenge posed by China, from its dominance in manufacturing to its leadership in electric vehicles, drones, and robotics. However, he emphasized that America retains a critical edge in creativity and research. To maintain this advantage, he called for a coordinated national strategy, urging policymakers to embrace a growth-oriented agenda and collaborate with the private sector.

    The Role of Leadership

    The interview underscored the importance of leadership in navigating these challenges. Andreessen expressed confidence in the current administration’s commitment to fostering technological innovation and reining in bureaucratic inefficiencies. He noted the need for a cultural and operational transformation within federal institutions to match the speed and agility of private-sector innovators.

    Morning Again in America

    In a nod to Ronald Reagan’s iconic 1984 campaign, Andreessen painted a hopeful vision for America’s future. He envisions a golden age fueled by breakthroughs in energy, defense, and AI—if the nation can align its policies and resources to harness these opportunities.

    Marc Andreessen’s message is clear: With the right blend of leadership, innovation, and strategic vision, America can renew itself and reaffirm its position as a global beacon of progress and prosperity.

  • How NVIDIA is Revolutionizing Computing with AI: Jensen Huang on AI Infrastructure, Digital Employees, and the Future of Data Centers

    NVIDIA CEO Jensen Huang discusses the company’s role in revolutionizing computing through AI, emphasizing decade-long investments in scalable, interconnected AI infrastructure, breakthroughs in efficiency, and the future of digital and embodied AI as transformative for industries globally.


    NVIDIA is transforming the landscape of computing, driving innovation at every level from data centers to digital employees. In a recent conversation with Jensen Huang, NVIDIA’s CEO, he offered a rare look at the strategic direction and long-term vision that has positioned NVIDIA as a leader in the AI revolution. Through decade-long infrastructure investments, NVIDIA is not just building hardware but creating “AI factories” that promise to impact industries globally.

    Decade-Long Investments in AI Infrastructure

    For NVIDIA, success has come from looking far into the future. Jensen Huang emphasized the company’s commitment to ten-year investments in scalable, efficient AI infrastructure. With an eye on exponential growth, NVIDIA has focused on creating solutions that can continue to meet demand as AI expands in complexity and scope. One of the cornerstones of this approach is NVLink technology, which enables GPUs to function as a unified supercomputer, allowing unprecedented scale for AI applications.

    This vision aligns with Huang’s goal of optimizing data centers for high-performance AI, making NVIDIA’s infrastructure not only capable of tackling today’s AI challenges but prepared for tomorrow’s even larger-scale demands.

    Outpacing Moore’s Law with Full-Stack Integration

    Huang highlighted how NVIDIA aims to surpass the limits of traditional computing, especially Moore’s Law, by focusing on a full-stack integration strategy. This strategy involves designing hardware and software as a cohesive unit, enabling a 240x reduction in AI computation costs. With this approach, NVIDIA has achieved performance improvements that far exceed conventional expectations while driving down both cost and energy usage across its AI operations.
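    Huang’s framing can be sanity-checked with back-of-envelope arithmetic. The sketch below is a rough illustration, not NVIDIA’s own accounting: the 240x figure is Huang’s claim, and the ten-year window is an assumption. It compares the compound annual rate implied by a 240x full-stack gain against Moore’s-Law-style doubling every two years:

    ```python
    # Back-of-envelope comparison of full-stack gains vs. Moore's Law.
    # Illustrative assumptions only: the 240x figure is Huang's claim,
    # and the ten-year window is assumed, not stated explicitly.
    years = 10

    moore_total = 2 ** (years / 2)        # ~2x every 2 years -> ~32x in 10 years
    fullstack_total = 240                 # claimed full-stack cost reduction

    moore_annual = moore_total ** (1 / years)          # ~1.41x per year
    fullstack_annual = fullstack_total ** (1 / years)  # ~1.73x per year

    print(f"Moore's Law over {years} years: ~{moore_total:.0f}x ({moore_annual:.2f}x/yr)")
    print(f"Full-stack gain: {fullstack_total}x ({fullstack_annual:.2f}x/yr)")
    ```

    Even under these rough assumptions, the implied annual improvement is about 1.7x versus 1.4x, and that modest-looking per-year difference compounds into the order-of-magnitude gap Huang describes.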

    The full-stack approach has enabled NVIDIA to continually upgrade its infrastructure and enhance performance, ensuring that each component of its architecture is optimized and aligned.

    The Evolution of Data Centers: From Storage to AI Factories

    One of NVIDIA’s groundbreaking shifts is the redefinition of data centers from traditional storage units to “AI factories” generating intelligence. Unlike conventional data centers focused on multi-tenant storage, NVIDIA’s new data centers produce “tokens” for AI models at an industrial scale. These tokens are used in applications across industries, from robotics to biotechnology. Huang believes that every industry will benefit from AI-generated intelligence, making this shift in data centers vital to global AI adoption.

    This AI-centric infrastructure is already making waves, as seen with NVIDIA’s 100,000-GPU supercluster built for xAI. NVIDIA demonstrated its logistical prowess by setting up this supercluster rapidly, paving the way for similar large-scale projects in the future.

    The Role of AI in Science, Engineering, and Digital Employees

    NVIDIA’s infrastructure investments and technological advancements have far-reaching impacts, particularly in science and engineering. Huang shared that AI-driven methods are now integral to NVIDIA’s chip design process, allowing them to explore new design options and optimize faster than human engineers alone could. This innovation is just the beginning, as Huang envisions AI reshaping fields like biotechnology, materials science, and theoretical physics, creating opportunities for breakthroughs at a previously impossible scale.

    Beyond science, Huang foresees AI-driven digital employees as a major component of future workforces. AI employees could assist in roles like marketing, supply chain management, and chip design, allowing human workers to focus on higher-level tasks. This shift to digital labor marks a major milestone for AI and has the potential to redefine productivity and efficiency across industries.

    Embodied AI and Real-World Applications

    Huang believes that embodied AI—AI in physical form—will transform industries such as robotics and autonomous vehicles. Self-driving cars and robots equipped with AI will become more common, thanks to NVIDIA’s advancements in AI infrastructure. By training these AI models on NVIDIA’s systems, industries can integrate intelligent robots and vehicles without needing substantial changes to existing environments.

    This embodied AI will serve as a bridge between digital intelligence and the physical world, enabling a new generation of applications that go beyond the screen to interact directly with people and environments.

    Sustaining Innovation Through Compatibility and Software Longevity

    Huang stressed that compatibility and sustainability are central to NVIDIA’s long-term vision. NVIDIA’s CUDA platform has enabled the company to build a lasting ecosystem, allowing software created on earlier NVIDIA systems to operate seamlessly on newer ones. This commitment to software longevity means companies can rely on NVIDIA’s systems for years, making it a trusted partner for businesses that prioritize innovation without disruption.

    NVIDIA as the “AI Factory” of the Future

    As Huang puts it, NVIDIA has evolved beyond a hardware company and is now an “AI factory”—a company that produces intelligence as a commodity. Huang sees AI as a resource as valuable as energy or raw materials, with applications across nearly every industry. From providing AI-driven insights to enabling new forms of intelligence, NVIDIA’s technology is poised to transform global markets and create value on an industrial scale.

    Jensen Huang’s vision for NVIDIA is not just about staying ahead in the computing industry; it’s about redefining what computing means. NVIDIA’s investments in scalable infrastructure, software longevity, digital employees, and embodied AI represent a shift in how industries will function in the future. As Huang envisions, the company is no longer just producing chips or hardware but enabling an entire ecosystem of AI-driven innovation that will touch every aspect of modern life.

  • Meet Lex Fridman: AI Researcher and Podcast Host

    Lex Fridman is a research scientist and host of the popular “Lex Fridman Podcast,” which explores the future of artificial intelligence and its potential impact on humanity.

    Fridman was born in Moscow (then part of the Soviet Union) and immigrated to the United States as a child. He received his bachelor’s, master’s, and doctoral degrees from Drexel University in Philadelphia.

    After completing his Ph.D., Fridman joined the Massachusetts Institute of Technology (MIT) as a research scientist, where he has focused on human-centered artificial intelligence and autonomous systems, including research on semi-autonomous driving.

    In addition to his work as a researcher and professor, Fridman is also a popular public speaker and media personality. He has given numerous talks and interviews on artificial intelligence and its potential impact on society.

    Fridman is best known for his podcast, which he launched in 2018 as the “Artificial Intelligence Podcast” and later renamed the “Lex Fridman Podcast.” The podcast features in-depth interviews with experts in the field of artificial intelligence, including researchers, engineers, and philosophers. The goal of the podcast is to explore the complex and often controversial issues surrounding the development and deployment of artificial intelligence, and to stimulate thoughtful and nuanced discussions about its future.

    Some of the topics that Fridman and his guests have discussed on the podcast include the ethics of artificial intelligence, the potential risks and benefits of AI, and the challenges of ensuring that AI systems behave in ways that align with human values.

    In addition to his work as a researcher and podcast host, Fridman is also active on social media, where he shares his thoughts and insights on artificial intelligence and other topics with his followers.

    Overall, Fridman is a thought leader in the field of artificial intelligence and a respected voice on the future of this rapidly evolving technology. His podcast and social media presence provide a valuable platform for examining the questions AI raises and for engaging the public in thoughtful, nuanced discussion about where the technology is headed.