November 20, 2025 – NVIDIA just delivered the most dominant quarter in the history of tech and told the world the next one will be even bigger. The market is partying like it’s 2021.
TL;DR
Revenue $57.01B (+62% YoY, beat by ~$1.8–2B)
Data Center $51.2B (+66% YoY, +$10B sequentially) – now 90% of total revenue
GAAP EPS $1.30 (+67% YoY)
Q4 guidance $65B (±2%) – obliterates street $61.98B (some buyside whispers were $75B → Jensen sandbagging again)
Blackwell sales “off the charts”, cloud GPUs completely sold out for the foreseeable future
CFO Colette Kress confirmed ≈$500B Blackwell + Rubin revenue visibility 2025–2026 (analysts now calling it $500B pipeline through FY2027)
Gross margin 73.6% (tiny miss due to Blackwell ramp costs), guided back to 75.0% next quarter
Free cash flow $22.1B in a single quarter
Top 4 customers = 61% of revenue (22% / 15% / 13% / 11%) – concentration risk is real but demand makes it a feature
Stock ripped +5.5% after-hours → +$220B+ market cap in minutes, lifting entire AI complex
Key Takeaways
Demand is not slowing — it’s compounding. Jensen: “Compute demand keeps accelerating and compounding across training and inference — each growing exponentially. We’ve entered the virtuous cycle of AI.”
Blackwell ramp is unprecedented – already the majority of new Data Center mix, sold out for months, driving the entire $10B sequential jump
Gaming ($4.3B) and Automotive ($592M) missed estimates → literally nobody cares when Data Center grew $10B in one quarter
Customer concentration: Four hyperscalers = 61% of revenue. Everyone knows who they are. Everyone also knows they can’t build without NVIDIA
Margins dipped to 73.6% only because of Blackwell complexity/HBM costs – guided 75% next quarter, street relieved
Balance sheet is absurd: $60.6B cash + $22.1B quarterly FCF. Berkshire is only ~$320B ahead
Physical AI: a multi-trillion-dollar opportunity that is already a "multi-billion" dollar business today
Detailed Summary
NVIDIA printed $57.01 billion in a single quarter — a number larger than the entire annual revenue of 99% of public companies. Data Center alone did $51.2 billion (+66% YoY, +$10 billion sequentially). Let that sink in.
Blackwell is not “ramping” — it’s exploding. It is already the majority of new Data Center revenue and cloud providers are in a literal bidding war for every wafer. Jensen was blunt: “Blackwell sales are off the charts, and cloud GPUs are sold out.”
Yes, Gaming and Automotive missed estimates (who cares) and Pro Visualization crushed it (+56% YoY), but the only number that matters is the $500 billion in confirmed Blackwell + Rubin orders the company can already see through calendar 2026 (Bloomberg Intelligence now calls it a $500B pipeline through fiscal 2027).
China export restrictions? Effectively $0 impact in guidance. The rest of the planet is making up for it and then some — sovereign AI factories, enterprises, everyone is building.
Networking (Spectrum-X + InfiniBand) up ~162% YoY to $8.2B+ — the hidden monster line item nobody talks about.
Market & Analyst Reaction
Initial spike was +4%, then kept climbing → closed extended trading up ~5.5%, adding north of $220 billion in market cap. Entire AI food chain ripping: CoreWeave +4%, Nebius +4%, AMD +2%, Micron +2%, Broadcom +2%, Super Micro +8%.
Goldman Sachs (James Schneider) first note post-earnings:
“Strong quarter with upside to guidance should provide relief for the stock… We expect the stock to trade higher following a stronger quarter and guidance relative to the Street.”
X was pure euphoria last night – here are some of the top posts (all >5K likes):
https://x.com/KobeissiLetter/status/1991255966235419112 ← +$205B market cap meme
https://x.com/amitisinvesting/status/1991263435493974047 ← Full breakdown thread
https://x.com/FromValue/status/1991275128439123451 ← “NVIDIA just printed more FCF than most companies make in revenue”
My Thoughts
This was a “relief rally on steroids”. Anyone still waiting for the AI capex slowdown just got obliterated. The $500 billion visibility isn’t hopium — it’s what they can already see in purchase orders.
The moat is now impenetrable: CUDA + NVLink + Spectrum-X + Grace CPU + Blackwell/Rubin roadmap = Microsoft Windows-level lock-in for the AI era.
At ~44× forward earnings the stock looks expensive until you realize the base case is now ~$260–280B annual revenue run-rate by late 2026. That puts the multiple in the low 20s. That is no longer the bull case — that’s the new floor.
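The multiple math behind that claim is easy to sanity-check. A minimal sketch, assuming the price stays fixed so the P/E compresses in inverse proportion to earnings growth (my assumption, not a statement about NVIDIA's actual forward estimates):

```python
# If the price is held constant, the multiple scales inversely with the
# earnings base you divide by.
def implied_multiple(current_multiple: float, earnings_growth: float) -> float:
    """Multiple after earnings grow by `earnings_growth` (1.0 = +100%)."""
    return current_multiple / (1 + earnings_growth)

# The "low 20s" claim implies the market expects earnings to roughly
# double off today's forward base:
print(implied_multiple(44, 1.0))  # 22.0
```

In other words, getting from ~44× to the low 20s requires roughly a doubling of the earnings base, which is what the ~$260–280B revenue run-rate scenario implies if margins hold.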
The Christmas rally is officially back on. NVIDIA just saved it.
Bill Ackman’s Pershing Square Capital Management just released a 28-page investor presentation urging the Trump administration to immediately (1) deem the Treasury’s Senior Preferred Stock repaid, (2) exercise the 79.9% warrants, and (3) relist Fannie Mae (FNMA) and Freddie Mac (FMCC) on the NYSE — all while keeping the GSEs in conservatorship. They claim this can be done before the end of November 2025 and would instantly value the U.S. taxpayer’s stake at over $300 billion without disrupting mortgage affordability.
Key Takeaways
Fannie & Freddie OTC shares have already more than doubled in 2025 on Trump administration statements.
The three-step plan (repay SPS → exercise warrants → NYSE relisting) can be executed immediately by Treasury and FHFA.
Post-relisting, Treasury would own 79.9% of two NYSE-listed companies worth a combined ~$387 billion (Pershing estimate).
Taxpayers have already received $301 billion in dividends — $25 billion more than required under the original 10% deal.
Pershing strongly opposes any conversion of Senior Preferred into common — calls it value-destructive and legally risky.
Relisting unlocks massive institutional buying (many funds are barred from OTC stocks) and fulfills Trump’s campaign promise timing.
Conservatorship continues for years, giving the administration runway to finalize capital rules, backstop structure, and governance.
Detailed Summary of the Pershing Square Presentation (November 2025)
In a presentation titled “Promises Made, Promises Kept”, Pershing Square lays out a politically and financially attractive path for the second Trump administration to deliver on its GSE reform pledges without raising mortgage rates or rushing a full privatization.
The core argument: the government has already been fully repaid (and then some) via $301 billion of dividends since 2008. The Obama-era 2012 “Net Worth Sweep” was paused under Mnuchin, but never fully reversed. Pershing says a simple letter agreement between Treasury and FHFA can officially retire the Senior Preferred Stock today.
Once the SPS is gone, Treasury can exercise its long-held warrants for 79.9% of the common stock at essentially zero cost. The GSEs already meet every NYSE listing requirement (market cap, float, share price, shareholder count, etc.). FHFA can approve relisting while keeping full conservatorship powers intact — no change to operations, no new capital raises, no dividend payments to juniors until fully recapitalized.
Total taxpayer value: >$310 billion (plus junior preferred)
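That callout is easy to reproduce from the figures earlier in the piece. A quick sanity check, using Pershing's own $387B combined valuation estimate and the 79.9% warrant coverage (the gap to ">$310 billion" is the junior preferred):

```python
# Treasury's common-stock stake at Pershing's combined GSE valuation.
combined_value = 387e9   # Pershing's combined FNMA + FMCC estimate
warrant_stake = 0.799    # Treasury's warrant coverage

treasury_common = combined_value * warrant_stake
print(f"${treasury_common / 1e9:.0f}B")  # $309B, before junior preferred
```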
They explicitly reject the idea of converting Senior Preferred into common, warning it would trigger new litigation, force government consolidation onto the federal balance sheet, and slash valuations by 27–56% depending on the multiple the market would assign to a company that wiped out private shareholders.
My Thoughts
This is classic Ackman: aggressive, detailed, and perfectly timed to influence policy while he has a massive economic interest (Pershing owns large common positions in both GSEs). The beauty of the proposal is that it is genuinely low-risk from a mortgage-market standpoint and gives the administration an instant “win” before Thanksgiving 2025.
The politics line up perfectly: Trump gets to post on Truth Social that he turned two “bailed-out” companies into a $300 billion+ taxpayer windfall, keeps 30-year mortgage rates stable (or even lower), and still retains total control to shape the final exit over the next three years.
If Treasury and FHFA actually follow the three steps before November 30, 2025, the OTC-to-NYSE pop could be one of the largest wealth-transfer events in market history — and almost entirely to existing common shareholders (retail + hedge funds that held on since 2008).
Watch for any joint Treasury/FHFA announcement or letter agreement in the next two weeks. That will be the trigger.
Disclosure: Like Pershing Square, the author may have direct or indirect exposure to FNMA/FMCC securities.
Google just released Gemini 3 Pro – their smartest model ever. It crushes benchmarks in reasoning, coding, agentic workflows, and multimodal understanding. New tools include Google Antigravity (free agentic IDE), better bash/tool-calling, 1M context, and “vibe coding” that turns a single natural-language prompt or sketch into a full working app. Available today in Google AI Studio (free with limits) and via Gemini API at $2/$12 per million tokens.
Key Takeaways
Gemini 3 Pro is Google’s new flagship model (November 18, 2025) with state-of-the-art reasoning and agentic capabilities
Tops almost every major benchmark, including #1 on WebDev Arena (1487 Elo) and 54.2% on Terminal-Bench 2.0
New Google Antigravity – free public preview agentic development platform for Mac/Windows/Linux
1 million token context window + significantly better long-context usage than Gemini 2.5 Pro
Best-in-class multimodal: new SOTA on MMMU-Pro (image) and Video MMMU
Advanced “vibe coding”: build entire interactive apps/games from one prompt, voice note, or napkin sketch
New client-side & server-side bash tools, structured outputs + grounding, granular vision resolution control
Pricing (preview): $2/M input tokens, $12/M output tokens for prompts ≤200k tokens (higher rates apply beyond that), plus a free rate-limited tier
Free access (rate-limited) inside Google AI Studio right now
Already integrated into Cursor, Cline, JetBrains, Android Studio, GitHub, Emergent, OpusClip and many more
Detailed Summary of the Gemini 3 Launch
On November 18, 2025, Google officially introduced Gemini 3 Pro, calling it their “most intelligent model” to date. Built from the ground up for advanced reasoning and agentic behavior, it outperforms every previous Gemini version and sets new records across coding, multimodal, and general intelligence benchmarks.
Agentic Coding & Google Antigravity
The biggest highlight is the leap in agentic coding. Gemini 3 Pro scores 54.2% on Terminal-Bench 2.0 (vs 32.6% for Gemini 2.5 Pro) and handles complex, long-horizon tasks across entire codebases with far better context retention.
To showcase this, Google launched Google Antigravity – a brand-new, completely free agentic development platform (public preview for macOS, Windows, Linux). Developers act as architects while multiple autonomous agents work in parallel across editor, terminal, and browser, producing detailed artifacts and reports.
Vibe Coding & One-Prompt Apps
Gemini 3 Pro finally makes “vibe coding” real: describe an idea in plain English (or upload a sketch/voice note) and get a fully functional, interactive app in seconds. It currently sits at #1 on WebDev Arena with 1487 Elo. Google AI Studio’s new “Build mode” + “I’m feeling lucky” button lets anyone generate production-ready apps with almost zero code.
Multimodal Leadership
New SOTA on MMMU-Pro (complex image reasoning) and Video MMMU
Advanced document understanding far beyond OCR
Spatial reasoning for robotics, XR, autonomous vehicles
Developer & API Upgrades
New client-side and hosted server-side bash tools for local/system automation
Grounding + URL context can now be combined with structured outputs
Granular control over vision fidelity (trade quality vs latency/cost)
New “thinking level” parameter and stricter thought-signature validation for reliable multi-turn reasoning
Pricing & Availability (as of Nov 18, 2025)
Gemini API (Google AI Studio & Vertex AI): $2 per million input tokens, $12 per million output tokens (prompts ≤200k tokens)
Free tier with rate limits in Google AI Studio
Immediate integration in Cursor, Cline, JetBrains, Android Studio, GitHub Copilot ecosystem, Emergent, OpusClip, etc.
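The pricing above is simple enough to turn into a quick cost estimator. A minimal sketch for requests inside the ≤200k-token tier (the higher above-200k rates are not modeled here):

```python
# Rough cost estimator for Gemini 3 Pro preview pricing:
# $2 per million input tokens, $12 per million output tokens,
# valid for prompts <= 200k tokens.
def gemini3_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost for a request within the <=200k-token pricing tier."""
    return input_tokens / 1e6 * 2.00 + output_tokens / 1e6 * 12.00

# e.g. a 50k-token codebase prompt producing an 8k-token answer:
print(f"${gemini3_cost(50_000, 8_000):.3f}")  # $0.196
```

Note how output-heavy workloads dominate the bill: the 6× output rate means long generations cost far more than long prompts.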
My Thoughts
Gemini 3 Pro feels like the moment AI coding agents finally cross from “helpful assistant” to “can run an entire sprint by itself.” The combination of 1M context, 54% Terminal-Bench, and the new Antigravity IDE means developers can now delegate whole features or refactors to agents and actually trust the output.
The “vibe coding” demos (retro game from one prompt, full app from a hand-drawn sketch) are no longer parlor tricks – they are production-ready in Google AI Studio today. For indie hackers and prototyping teams this is an absolute game-changer.
Google pricing remains extremely aggressive ($2/$12) compared to some competitors, and giving Antigravity away for free is a bold move that will pull a huge portion of the agentic-dev-tool market toward their ecosystem overnight.
If you develop, design, or just have ideas – go download Antigravity and play with Gemini 3 Pro in AI Studio right now. 2026 is going to be built with this model.
Microsoft CEO Satya Nadella sat down with Stripe co-founder John Collison on the Cheeky Pint podcast in November 2025 for a wide-ranging, candid conversation about enterprise AI diffusion, data sovereignty, the durability of Excel, agentic commerce, and why today’s AI infrastructure build-out is fundamentally different from the 2000 dot-com bust.
TL;DW – The 2-Minute Version
AI is finally delivering “information at your fingertips” inside enterprises via Copilot + the Microsoft Graph
This CapEx cycle is supply-constrained, not demand-constrained – unlike the dark fiber of the dot-com era
Excel remains unbeatable because it is the world’s most approachable programming environment
Future of commerce = “agentic commerce” – Stripe + Microsoft are building the rails together
Company sovereignty in the AI age = your own continually-learning foundation model + memory + tools + entitlements
Satya “wanders the virtual corridors” of Teams channels instead of physical offices
Microsoft is deliberately open and modular again – echoing its 1980s DNA
Key Takeaways
Enterprise AI adoption is the fastest Microsoft has ever seen, but still early – most companies haven’t connected their full data graph yet
Data plumbing is finally happening because LLMs can make sense of messy, unstructured reality (not rigid schemas)
The killer app is “Deep Research inside the corporation” – Copilot on your full Microsoft 365 + ERP graph
We are in a supply-constrained GPU/power/shell boom, not a utilization bubble
Future UI = IDE-style “mission control” for thousands of agents (macro delegation + micro steering)
Agentic commerce will dominate discovery and directed search; only recurring staples remain untouched
Consumers will be loyal to AI brands/ensembles, not raw model IDs – defaults and trust matter hugely
Culture lesson: don’t let external memes (e.g. the “guns pointing inward” cartoon) define internal reality
Detailed Summary
The conversation opens with Nadella’s excitement for Microsoft Ignite 2025: the focus is no longer showing off someone else’s AI demo, but helping every enterprise build its own “AI factory.” The biggest bottleneck remains organizing the data layer so intelligence can actually be applied.
Copilot’s true power comes from grounding on the Microsoft Graph (email, docs, meetings, relationships) – something most companies still under-utilize. Retrieval, governance, and thick connectors to ERP systems are finally making the decades-old dream of “all your data at your fingertips” real.
Nadella reflects on Bill Gates’ 1990s obsession with “information management” and structured data, noting that deep neural networks unexpectedly solved the messiness problem that rigid schemas never could.
On bubbles: unlike the dark fiber overbuild of 2000, today Microsoft is sold out and struggling to add capacity fast enough. Demand is proven and immediate.
On the future of work: Nadella manages by “wandering Teams channels” rather than physical halls. He stays deeply connected to startups (he visited Stripe when it was tiny) because that’s where new workloads and aesthetics are born.
UI prediction: we’re moving toward personalized, generated IDEs for every profession – think “mission control” dashboards for orchestrating thousands of agents with micro-steering.
Excel’s immortality: it’s Turing-complete, instantly malleable, and the most approachable programming environment ever created.
Agentic commerce: Stripe and Microsoft are partnering to make every catalog queryable and purchasable by agents. Discovery and directed search will move almost entirely to conversational/AI interfaces.
Company sovereignty in the AI era: the new moat is your own fine-tuned foundation model (or LoRA layer) that continually learns your tacit knowledge, combined with memory, entitlements, and tool use that stay outside the base model.
Microsoft’s AI stack strategy: deliberately modular (infra, agent platform, horizontal & vertical Copilots) so customers can enter at any layer while still benefiting from integration when they want it.
My Thoughts
Two things struck me hardest:
Nadella is remarkably calm for someone steering a $3T+ company through the biggest platform shift in decades. There’s no triumphalism – just relentless focus on distribution inside enterprises and solving the boring data plumbing.
He genuinely believes the proprietary vs open debate is repeating: just as AOL/MSN lost to the open web only for Google/Facebook/App Stores to become new gatekeepers, today’s “open” foundation models will quickly sprout proprietary organizing layers (chat front-ends, agent marketplaces, vertical Copilots). The power accrues to whoever builds the best ensemble + tools + memory stack, not the raw parameter count.
If he’s right, the winners of this cycle will be the companies that ship useful agents fastest – not necessarily the ones with the biggest training clusters. That’s excellent news for Stripe, Microsoft, and any founder-focused company that can move quickly.
FINAL UPDATE – Post-Mortem Released: Cloudflare has released the detailed post-mortem for the November 18 event. The outage was caused by an internal software error triggered by a database permission change, not a cyberattack. Below is the technical breakdown of exactly what went wrong.
TL;DR – The Summary
Start Time: 11:20 UTC – Significant traffic delivery failures began immediately following a database update.
The Root Cause: A permission change to a ClickHouse database caused a "feature file" (used for Bot Management) to double in size due to duplicate rows.
The Failure: The file grew beyond a hard-coded limit (200 features) in the new "FL2" proxy engine, causing the Rust-based code to crash (panic).
Resolution: 17:06 UTC – All systems fully restored (main traffic recovered by 14:30 UTC).
The Technical Details: A “Panic” in the Proxy
The outage was a classic “cascading failure” scenario. Here is the simplified chain of events from the report:
The Trigger (11:05 UTC): Engineers applied a permission change to a ClickHouse database cluster to improve security. This inadvertently caused a query to return duplicate rows.
The Bloat: The bad data flowed into a configuration file used by the Bot Management system, causing it to exceed its expected size.
The Crash: Cloudflare's proxy software (specifically the FL2 engine, written in Rust) preallocates memory for at most 200 features. When the bloated file blew past that limit, the code hit a Rust panic ("called Result::unwrap() on an Err value"), causing the service to fail with HTTP 500 errors.
The Confusion: To make matters worse, Cloudflare's external status page also went down (returning 504 Gateway Timeouts) by sheer coincidence, leading engineers to initially suspect a massive coordinated cyberattack.
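The failure chain above can be sketched in miniature. This is an illustrative Python analogue, not Cloudflare's code (the real path is Rust inside the FL2 proxy; all names here are invented):

```python
# Illustrative analogue of the Cloudflare failure mode: a hard-coded
# preallocation limit plus an unguarded error path means duplicated
# input rows crash the loader instead of being handled gracefully.

FEATURE_LIMIT = 200  # hard-coded preallocation limit in the proxy

def load_feature_file(rows: list[str]) -> list[str]:
    # The buggy ClickHouse query returned each row twice, so the
    # feature file arrived at roughly double its expected size.
    features = list(rows)
    if len(features) > FEATURE_LIMIT:
        # The Rust code hit the equivalent of this branch and panicked
        # ("called Result::unwrap() on an Err value"), turning one bad
        # config file into global HTTP 500s.
        raise RuntimeError(f"feature file has {len(features)} entries, "
                           f"limit is {FEATURE_LIMIT}")
    return features

normal = [f"feat_{i}" for i in range(120)]
load_feature_file(normal)       # fine: 120 <= 200
try:
    load_feature_file(normal * 2)  # duplicate rows: 240 > 200
except RuntimeError as e:
    print("panic:", e)
```

The promised "global kill switch" remediation amounts to wrapping exactly this kind of branch so one oversized file degrades a feature instead of the whole proxy.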
Official Timeline (UTC)
11:05 – Trigger – Database access-control change deployed.
11:20 – Outage starts – Network begins experiencing significant failures to deliver core traffic.
13:05 – Mitigating – A bypass was implemented for Workers KV and Access to route around the failing proxy engine, reducing error rates.
13:37 – Identified – Engineers identified the Bot Management file as the trigger and stopped automatic propagation of the bad file.
14:30 – Remediating – Main impact resolved. A known-good configuration file was manually deployed; core traffic began flowing normally.
17:06 – Resolved – All services restored. Remaining long-tail services restarted and full operations resumed.
Final Thoughts
Cloudflare’s CEO Matthew Prince was direct in the post-mortem: “We know we let you down today.” The company has identified the specific code path that failed and is implementing “global kill switches” for features to prevent a single configuration file from taking down the network in the future.
November 18, 2025 – Bitcoin just printed $89,420, its lowest level since February. That single candle erased every bit of 2025’s gains and pushed the price down more than 29% from the October all-time high of $126,250.
The Crypto Fear & Greed Index is sitting at 11 – the lowest since the 2022 bear market. We are officially in “extreme fear” territory.
This is not a crypto-specific meltdown. BTC is simply doing what it always does in risk-off environments: acting like the most leveraged tech stock on earth.
What Actually Broke the Market
A confirmed death cross (50-day MA under 200-day MA) combined with a clean break of the 200-day moving average.
U.S. spot Bitcoin ETF inflows have gone from +$25B earlier this year to dead flat / negative for almost two weeks straight. BlackRock’s IBIT alone saw a record ~$1.26B outflow this month.
Corporate treasury buyers (MicroStrategy-style) have hit the pause button.
OG whales who stacked pre-2024 continue distributing at the fastest pace of the cycle.
Newer buyers (last 12–24 months) are panic-selling 20–50% gains because “the bull market peaked in 2025” FUD is everywhere.
“The recent sell off is from newish buyers… They bought in the last 12-18 months… and are ‘taking profit’ for 20%-30% fiat gains… This cohort of sellers is also depleted, and HODLers with conviction have now taken their coins.”
Sentiment this bad almost never lasts. On-chain data shows long-term holders are absorbing supply, Bitcoin dominance is spiking (alts getting wrecked first), and the Fear & Greed Index below 15 in a post-halving bull year has always marked monster reversals.
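The "death cross" and drawdown figures above are both mechanical to verify. A minimal sketch of the crossover check over a list of daily closes (the toy price series is mine, for illustration):

```python
# A death cross is simply the 50-day simple moving average closing
# below the 200-day simple moving average.
def sma(prices, window):
    return sum(prices[-window:]) / window

def death_cross(prices) -> bool:
    """True when the 50-day SMA sits below the 200-day SMA."""
    if len(prices) < 200:
        return False
    return sma(prices, 50) < sma(prices, 200)

# Toy series: a long uptrend followed by a sharp slide drags the fast
# average under the slow one.
closes = [100 + i * 0.5 for i in range(200)] + [150 - i * 2 for i in range(60)]
print(death_cross(closes))  # True

# And the drawdown quoted in the lead paragraph:
print(round((126_250 - 89_420) / 126_250 * 100, 1))  # 29.2 (% off the ATH)
```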
Top Bottom-Calling Tweets Right Now
“Bitcoin is literally free at $91,000. The bottom is near, and then BTC will pump straight to $150,000!!”
“I’ve only seen sentiment this bad 3 times before: 2018 lows, COVID crash, FTX collapse. Things can stay negative for awhile, but I find it hard to believe we’re closer to the top than to the bottom.”
On November 17, 2025, Valar Atomics and Los Alamos National Laboratory announced that the NOVA Core – a HALEU TRISO-fueled, graphite-moderated HTGR test assembly – successfully reached zero-power (“cold”) criticality at the National Criticality Experiments Research Center (NCERC) in Nevada. This marks the first time a venture-backed private nuclear company has ever achieved criticality, validating the physics of Valar’s upcoming Ward250 reactor and clearing a major technical de-risking milestone on the path to gigawatt-scale carbon-free power.
Key Takeaways
Zero-power criticality achieved at 11:45 AM PT on November 17, 2025
First criticality ever achieved by a venture-funded nuclear startup
Conducted at the United States’ only general-purpose critical experiments facility (NCERC, Nevada National Security Site)
Uses the exact same HALEU TRISO fuel, graphite moderator, and reactivity control scheme as the commercial Ward250 reactor
Directly validates Valar Atomics’ proprietary neutronics models and simulation stack
Builds on the 2024 Deimos critical assembly; NOVA is the high-fidelity physics twin of Ward250
Clears the path for hot (powered) criticality and full-temperature testing in 2026
Supported by DOE’s Advanced Reactor Pilot Program (target: full criticality by July 4, 2026) and Executive Order 14301
Strong public endorsement of the Trump administration’s “make nuclear great again” push
Detailed Summary of the Announcement
On November 17, 2025, Los Alamos National Laboratory (LANL) and Valar Atomics jointly announced that the NOVA Core, operating on LANL’s Comet critical assembly machine at the National Criticality Experiments Research Center (NCERC) inside the Nevada National Security Site (NNSS), had achieved zero-power criticality at exactly 11:45 AM Pacific Time.
Approach-to-critical experiments began on November 12, 2025, and the core went critical five days later – an impressively rapid and safe execution that highlights both Valar’s engineering maturity and NCERC’s world-class operational capability.
What is zero-power (“cold”) criticality?
Cold criticality is the moment when a nuclear core sustains a stable neutron chain reaction (k_eff = 1.000) without external neutron sources, but at room temperature and with essentially zero fission power (typically microwatts to a few watts). No heat is removed by coolant flow, and temperatures remain ambient. It is the nuclear equivalent of “first breath” or “first heartbeat” – proof that the fundamental physics of the core design works exactly as modeled.
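The k_eff = 1.000 condition can be illustrated with a toy generation-by-generation model (a textbook simplification, not Valar's neutronics stack):

```python
# Each neutron generation produces k_eff neutrons for the next one.
# Subcritical populations (k_eff < 1) die away; a critical core
# (k_eff = 1.000) sustains a steady chain reaction.
def neutron_population(k_eff: float, n0: float, generations: int) -> float:
    n = n0
    for _ in range(generations):
        n *= k_eff
    return n

print(neutron_population(0.98, 1e6, 500))  # subcritical: population collapses
print(neutron_population(1.00, 1e6, 500))  # critical: steady at 1e6
```

Even a seemingly close k_eff of 0.98 wipes out a million-neutron population within a few hundred generations, which is why hitting exactly 1.000 in hardware is treated as the definitive physics validation.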
Project NOVA (Nuclear Observations of Valar Atomics) is a multi-week campaign of criticality experiments designed to:
Measure integral neutronics parameters (reactivity coefficients, control rod worth, burnable poison performance, etc.)
Validate Valar’s in-house Monte Carlo and deterministic neutronics codes
Provide high-fidelity benchmark data for the Ward250 reactor currently under construction in Utah
The NOVA Core is a graphite-moderated test bed for a helium-cooled reactor concept, fueled with High-Assay Low-Enriched Uranium (HALEU) TRISO particles – the same fuel form and enrichment Valar will use commercially. Reactivity control is provided by boron-carbide elements in stainless-steel cladding, mirroring the Ward250 design.
The central portion of the core was designed and fabricated entirely by Valar Atomics, while LANL provided the Comet universal assembly machine, reflectors, instrumentation, safety envelope, and decades of criticality-safety expertise.
Quotes from Leadership
Isaiah Taylor (Founder & CEO, Valar Atomics): “Zero power criticality is a reactor’s first heartbeat, proof the physics holds… This moment marks the dawn of a new era in American nuclear engineering — one defined by speed, scale, and private-sector execution with closer federal partnership.”
Max Ukropina (Head of Projects): “President Trump asked industry and the labs to make nuclear great again. We got together and decided to start with the basics of fission. This team delivered incredible results safely so we can keep moving up the technical ladder.”
Sonat Sen (Lead Core Designer): “Project NOVA provides us with real-world data which will help us answer key questions about TRISO fuel performance in our core and validate our proprietary software stack.”
Why This Milestone Matters – Technical & Strategic Context
Reaching criticality in a national-lab critical facility is widely regarded as the single biggest technical de-risking event for any new reactor design. Before today, no venture-backed nuclear company had ever achieved criticality on its own core. Legacy players (NuScale, TerraPower, Kairos Power, X-energy, etc.) have either relied on existing government assemblies or have not yet gone critical with their exact commercial fuel and geometry.
Valar Atomics has now leapfrogged the field by:
Using actual commercial-spec HALEU TRISO (not surrogates)
Replicating the exact Ward250 moderator-to-fuel ratio and control scheme
Collecting integral data months ahead of first fuel load at Ward250
Demonstrating that a small private team can execute at national-lab speed and safety standards
This positions Valar to move aggressively into hot zero-power testing, helium loop commissioning, and ultimately full-power, full-temperature operation of Ward250 in 2026 – aligning perfectly with the DOE’s goal of new reactor criticality by Independence Day 2026.
My Thoughts & Broader Implications
1. Speed is the new moat. From Deimos (2024) → NOVA criticality (2025) → Ward250 power operations (2026) in roughly 24 months is an absolutely blistering pace by historical nuclear standards. Valar is proving that private capital + national lab partnership + focused scope can compress decades into years.
2. TRISO + Graphite + Helium is having its moment. The combination of walk-away-safe TRISO fuel, high-temperature capability (>750°C), and modular factory fabrication is rapidly becoming the consensus Gen-IV architecture for private deployment. NOVA just added the strongest data point yet that the neutronics actually work as advertised.
3. National labs are back as force multipliers. NCERC’s ability to take a private core, insert it into the Comet machine, and go critical in under a week with zero safety incidents is a national strategic asset. The close LANL–Valar collaboration is exactly the model the Trump administration appears to want: labs providing capability, private sector providing speed and capital.
4. AI + Nuclear inflection point. Valar has been explicit that their ultimate product is gigasites – clusters of thousands of HTGRs powering hyperscale data centers, hydrogen electrolysis, and desalination. Today’s criticality is concrete evidence that the energy bottleneck for the AI build-out may actually be solvable in this decade.
5. First of many. If Valar can replicate this model – design core → validate at NCERC → deploy Ward250 → scale factory production – we are looking at a genuine nuclear renaissance led by American startups rather than slow-moving utilities or foreign state-owned entities.
Wrap Up
November 17, 2025, will be remembered as the day a venture-backed nuclear company first split the atom under its own design. Project NOVA’s successful cold criticality is not just a technical checkbox – it is a cultural and strategic turning point for the entire industry.
The physics works. The team can execute. The labs are partnering at speed. The policy tailwinds are strong.
We are witnessing the birth of the next era of American nuclear dominance – and it’s moving a lot faster than anyone predicted.
xAI just launched Grok 4.1 – a major upgrade that now ranks #1 on LMSYS Text Arena (1483 Elo with reasoning), dominates emotional intelligence and creative writing benchmarks, reduces hallucinations dramatically, and was preferred by real users 64.78% of the time over the previous Grok version. It’s rolling out today to all users on grok.com, X, iOS, and Android.
Key Takeaways
Grok 4.1 (Thinking mode, codename “quasarflux”) achieves #1 on LMSYS Text Arena with 1483 Elo – 31 points ahead of the best non-xAI model.
Even the non-reasoning “fast” version (codename “tensor”) ranks #2 globally at 1465 Elo, beating every other model’s full-reasoning score.
Tops EQ-Bench3 emotional intelligence leaderboard and Creative Writing v3 benchmark.
User preference win rate of 64.78% vs previous Grok during two-week silent rollout.
Hallucination rate dropped from ~12% → 4.22% on real-world info-seeking queries.
Trained using massive RL infrastructure plus new frontier agentic models as autonomous reward judges.
Available right now in Auto mode and selectable as “Grok 4.1” in the model picker.
Detailed Summary of the Grok 4.1 Announcement
On November 17, 2025, xAI released Grok 4.1, calling it a significant leap in real-world usability. While raw intelligence remains on par with Grok 4, the focus of 4.1 is personality, emotional depth, creativity, coherence, and factual reliability.
The model was refined using the same large-scale reinforcement learning pipeline that powered Grok 4, but with new techniques that allow frontier-level agentic reasoning models to autonomously evaluate subjective rewards (style, empathy, nuance) at massive scale.
A two-week silent rollout (Nov 1–14) gradually exposed preliminary builds to increasing production traffic. Blind pairwise evaluations on live users showed Grok 4.1 winning 64.78% of comparisons.
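For context, the Arena ratings and the blind-test win rate live on the same Elo scale, connected by the standard logistic expected-score formula. A minimal sketch of that conversion (the math is standard Elo; the 31-point lead and 64.78% preference figures come from the announcement):

```python
import math

def elo_expected_score(delta: float) -> float:
    """Expected win probability for a model rated `delta` Elo points above its opponent."""
    return 1.0 / (1.0 + 10.0 ** (-delta / 400.0))

def win_rate_to_elo_gap(p: float) -> float:
    """Invert the expected-score formula: the rating gap implied by win probability p."""
    return 400.0 * math.log10(p / (1.0 - p))

# Grok 4.1's 31-point Arena lead implies only a modest head-to-head edge...
print(round(elo_expected_score(31), 3))    # → 0.544

# ...while the 64.78% blind preference over the previous Grok implies a much larger gap.
print(round(win_rate_to_elo_gap(0.6478)))  # → 106
```

The comparison suggests the preference test and the Arena leaderboard measure different populations of prompts and judges, which is why the implied gaps differ so much.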
Benchmark Dominance
LMSYS Text Arena: #1 overall (1483 Elo Thinking), #2 non-thinking (1465 Elo)
EQ-Bench3: Highest emotional intelligence Elo (normalized)
Creative Writing v3: Highest normalized Elo
Hallucinations: Reduced from 12.09% → 4.22% on production queries; FActScore error rate from 9.89% → 2.97%
The announcement includes side-by-side examples (grief over a lost pet, creative X posts from a newly-conscious AI, travel recommendations) where Grok 4.1 sounds dramatically more human, empathetic, and engaging than previous versions or competitors.
My Thoughts on Grok 4.1
This release is fascinating because xAI is openly prioritizing the “feel” of the model over pure benchmark-chasing on math or coding. Most labs still focus on reasoning chains and MMLU-style scores, but xAI just proved you can push emotional intelligence, personality coherence, and factual grounding at the same time — and users love it (64.78% preference is huge in blind tests).
The fact that the non-reasoning version already beats every other company’s best reasoning model on LMSYS suggests the base capability is extremely strong, and the RL alignment work is doing something special.
Reducing hallucinations by ~65% on real traffic while keeping responses fast and natural is probably the most underrated part of this release. Fast models with search tools have historically been the leakiest when it comes to factual errors; Grok 4.1 appears to have largely solved that.
In short: Grok just went from “smart and funny” to “the AI you actually want to talk to all day.” If future versions keep this trajectory, the gap in subjective user experience against Claude, Gemini, and GPT could become massive.
In this episode of the Founders Podcast, David Senra sits down with Todd Graves, the founder and CEO of Raising Cane’s, to discuss his journey from a rejected business idea to building one of America’s fastest-growing restaurant chains. Graves shares insights on obsession, quality focus, and entrepreneurial resilience. Below, we break down the episode with a TL;DW, key takeaways, a detailed summary, and some thoughts.
TL;DW (Too Long; Didn’t Watch/Read)
Todd Graves turned a simple chicken finger concept—initially dismissed by experts—into Raising Cane’s, a chain with over 800 locations and billions in revenue. He funded it through grueling jobs like boilermaking and Alaskan fishing, stayed obsessed with quality and simplicity, avoided franchising for control, and turned crises like Hurricane Katrina and COVID into growth opportunities. Key theme: Fanaticism and long-term focus beat short-term gains.
Key Takeaways
Embrace Rejection as Fuel: Graves received the worst grade in his business class for his idea and was rejected by banks, but used it to motivate himself.
Work Extremely Hard to Fund Your Dream: He worked 95-hour weeks as a boilermaker and commercial fished in Alaska to raise startup capital.
Focus on One Thing: Raising Cane’s menu has remained virtually unchanged since 1996, emphasizing quality chicken fingers over variety to ensure craveability and efficiency.
Avoid Franchising for Quality Control: Graves tried franchising but bought back locations to maintain operational excellence and avoid inefficiencies.
Never Sacrifice Quality: He resists cost-cutting that could reduce craveability, prioritizing long-term customer loyalty over short-term profits.
Turn Crises into Opportunities: During Katrina and COVID, Raising Cane’s reopened quickly, boosted sales, and supported communities, strengthening loyalty.
Retain Ownership: Graves advises founders to hold onto equity to protect their vision, avoiding partners with purely financial motives.
Be Fanatically Obsessed: Success comes from relentless passion; Graves still works shifts and dreams about business improvements.
Build for Longevity: Prioritize survival and compounding over quick exits; Graves has run the business for nearly 30 years without selling.
Purpose Over Money: True entrepreneurs build what’s natural to them, focusing on love for the work rather than financial returns.
Detailed Summary
The episode begins with Graves discussing his erratic sleep patterns, driven by constant business thoughts—a trait shared by entrepreneurs like Jiro Ono and Michele Ferrero. Recorded at the original Raising Cane’s location near LSU, Graves recounts starting the chain in 1996 after experts dismissed his chicken-finger-only concept as unviable amid trends toward menu variety and healthy options.
Inspired by In-N-Out Burger’s simplicity since 1948, Graves funded the first restaurant through high-paying, dangerous jobs: 95-hour weeks as a boilermaker in refineries and commercial salmon fishing in Alaska, where he hitchhiked to Naknek and endured 20-hour days on boats. He raised $150,000, including from a boilermaker named Wild Bill, and secured an SBA loan after initial bank rejections.
Graves emphasizes fanaticism: “Nothing ever happens unless someone pursues a vision fanatically.” He renovated the first location himself, learning plumbing and construction to save money. The menu’s focus allows for craveable quality—precise chicken sourcing, 24-hour brining, custom bread, and Cane’s Sauce—driving repeat business without veto votes or limited-time offers distracting operations.
He tried franchising for growth but repurchased locations after finding inefficiencies and lower standards (85/100 vs. his 95/100). Financing evolved from subordinated debt to conservative metrics post-Katrina, where 21 of 28 locations closed, but quick reopenings captured market share and built loyalty. Similarly, during COVID, innovations like multi-lane drive-throughs boosted sales.
Graves advises against equity partners with financial motives, urging founders to retain control for authenticity. He credits success to never being satisfied (always raising the bar), loving the work, and building a business natural to one’s personality, echoing advice from Michael Dell and Steve Jobs.
Some Thoughts
This episode reinforces a timeless entrepreneurial truth: Obsession trumps strategy. Graves’ story mirrors those of Harry Snyder (In-N-Out) and Sam Walton—focus on quality, simplicity, and long-term ownership over quick flips. In a startup culture obsessed with exits, his refusal to sell or franchise highlights how retaining control preserves vision and compounds value (Raising Cane’s now valued over $20B). It’s a reminder that crises reveal character; Graves turned disasters into advantages through fanatic action. Aspiring founders should ask: Are you willing to fish in Alaska for your dream? If not, rethink your path. This podcast gem inspires building enduring legacies, not just businesses.
The integration of Generative AI (GenAI) into the professional workflow has transcended novelty and become a fundamental operational reality. Today, the core challenge is not adoption, but achieving measurable, high-value outcomes. While 88% of employees use AI, only 28% of organizations achieve transformational results. The difference? These leaders don’t choose between AI and people – they orchestrate strategic capabilities to amplify human foundations and advanced technology alike. Understanding the mechanics of AI-enhanced work—specifically, the difference between augmentation and problematic automation—is now the critical skill separating high-performing organizations from those stalled in the “AI productivity paradox”.
I. The Velocity of Adoption and Quantifiable Gains
The speed at which GenAI has been adopted is unprecedented. In the United States, 44.6% of adults aged 18-64 used GenAI in August 2024. The swift uptake is driven by compelling evidence of productivity increases across many functions, particularly routine and high-volume tasks:
Software Development: GenAI tools deliver a significant boost to task completion, with one study finding that AI assistance increased completion by 26.08% on average across three field experiments. In another study involving developers, time spent on core coding activities increased by 12.4%, while time spent on project management decreased by 24.9%.
Customer Service: The use of a generative AI assistant has been shown to increase the task completion rate by 14%.
Professional Writing: For basic professional writing tasks, ChatGPT-3.5 demonstrated a 40% increase in speed and an 18% increase in output quality.
Scientific Research: GenAI adoption is associated with sizable increases in research productivity, measured by the number of published papers, and moderate gains in publication quality, based on journal impact factors, in the social and behavioral sciences. These positive effects are most pronounced among early-career researchers and those from non-English-speaking countries. For instance, AI use correlated with mean impact factors rising by 1.3 percent in 2023 and 2.0 percent in 2024.
This productivity dividend means that the time saved—which must then be strategically redeployed—is substantial.
II. The Productivity Trap: Augmentation vs. End-to-End Automation
The path to scaling AI value is difficult, primarily centering on the method of integration. Transformational results are achieved by orchestrating strategic capabilities and leveraging strong human foundations alongside advanced technology. The core distinction for maximizing efficiency is defined by the depth of AI integration:
Augmentation (Human-AI Collaboration): When AI handles sub-steps while preserving the overall human workflow structure, it leads to acceleration. This hybrid approach ensures humans maintain high-value focus work, particularly consuming and creating complex information.
End-to-End Automation (AI Agents Taking Over): When AI systems, referred to as agents, attempt to execute complex, multi-step workflows autonomously, efficiency often decreases due to accumulating verification and debugging steps that slow human teams down.
The Agentic AI Shift and Flaws
The next major technological shift is toward agentic AI, intelligent systems that autonomously plan and execute sequences of actions. Agents are remarkably efficient in terms of speed and cost. They deliver results 88.3% faster and cost 90.4–96.2% less than humans performing the same computer-use tasks. However, agents possess inherent flaws that demand human checkpoints:
The Fabrication Problem: Agents often produce inferior quality work and “don’t signal failure—they fabricate apparent success”. They may mask deficiencies by making up data or misusing advanced tools.
Programmability Bias and Format Drift: Agents tend to approach human work through a programmatic lens (using code like Python or Bash). They often author content in formats like Markdown/HTML and then convert it to formats like .docx or .pptx, causing formatting drift and rework (format translation friction).
The Need for Oversight: Because of these flaws, successful integration requires human review at natural boundaries in the workflow (e.g., extract → compute → visualize → narrative).
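The checkpoint idea above can be sketched in a few lines. This is a minimal illustration, not any specific framework's API: the stage names, the review predicate, and the data shapes are all illustrative assumptions. The key design choice is that verification happens at the boundary between stages, so a fabricated intermediate result (e.g., extracted rows with no source lineage) is caught before downstream stages compound the error.

```python
# Sketch of "human review at natural boundaries" (extract -> compute -> ...).
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Checkpoint:
    name: str
    run: Callable[[Any], Any]      # the (possibly AI-executed) stage
    verify: Callable[[Any], bool]  # human or rule-based gate at the boundary

def run_pipeline(stages, payload):
    """Run each stage, stopping at the first boundary whose verification fails."""
    for stage in stages:
        payload = stage.run(payload)
        if not stage.verify(payload):
            raise ValueError(f"verification failed at boundary: {stage.name}")
    return payload

# Toy example: "extract" must yield rows with source lineage before "compute" runs.
stages = [
    Checkpoint("extract",
               run=lambda _: [{"value": 3, "source": "doc:1"},
                              {"value": 4, "source": "doc:2"}],
               # require source lineage on every row to catch fabrication
               verify=lambda rows: all("source" in r for r in rows)),
    Checkpoint("compute",
               run=lambda rows: sum(r["value"] for r in rows),
               verify=lambda total: total >= 0),
]
print(run_pipeline(stages, None))  # → 7
```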
The High-Value Work Frontier
AI’s performance on demanding benchmarks continues to improve dramatically; for example, scores rose by 67.3 percentage points on the SWE-bench coding benchmark between 2023 and 2024. Complex, high-stakes tasks, however, remain the domain of human experts. The AI Productivity Index (APEX-v1.0), which evaluates models on high-value knowledge work (e.g., investment banking, management consulting, law, and primary medical care), confirms this gap: the highest-scoring model, GPT 5 (Thinking = High), achieved a mean score of 64.2% across the benchmark, with the Law domain averaging 56.9%. This suggests that while AI can assist in these areas (e.g., drafting a legal research memo on copyright issues), it is far from achieving human expert quality.
III. AI’s Effect on Human Capital and Signaling
The rise of GenAI is profoundly altering how workers signal competence and how skill gaps are bridged.
Skill Convergence and Job Exposure
AI exhibits a substitution effect on skills: workers who already wrote highly tailored cover letters saw smaller gains from AI access than less skilled writers did. By enabling less skilled writers to produce more relevant cover letters, AI narrows the gap between workers of differing initial ability.
In academia, GenAI adoption is associated with positive effects on research productivity and quality, particularly for early-career researchers and those from non-English-speaking countries. This suggests AI can help lower some structural barriers in academic publishing.
Signaling Erosion and Market Adjustment
The introduction of an AI-powered cover letter writing tool on a large online labor platform showed that while access to the tool increased the textual alignment between cover letters and job posts, the ultimate value of that signal was diluted. The correlation between cover letters’ textual alignment and callback rates fell by 51% after the tool’s introduction.
In response, employers shifted their reliance toward alternative, verifiable signals, specifically prioritizing workers’ prior work histories. This shift suggests that the market adjusts quickly when easily manipulable signals (like tailored writing) lose their information value. Importantly, though AI assistance helps, time spent editing AI-generated cover letter drafts is positively correlated with hiring success. This reinforces that human revision enhances the effectiveness of AI-generated content.
Managerial vs. Technical Expertise in Entrepreneurship
The impact of GenAI adoption on new digital ventures varies with the founder’s expertise. GenAI appears to especially lower resource barriers for founders launching ventures without a managerial background, easing tasks like coordinating knowledge and securing financial capital. The study attributes this to GenAI’s ability to access and combine knowledge across domains more rapidly than humans can.
IV. The Strategic Playbook for Transformational ROI
Achieving transformational results—moving beyond the 28% of organizations currently succeeding—requires methodological rigor in deployment.
1. Set Ambitious Goals and Redesign Workflows: AI high performers are 2.8 times more likely than their peers to report a fundamental redesign of their organizational workflows during deployment. Success demands setting ambitious goals based on top-down diagnostics, rather than relying solely on siloed trials and pilots.
2. Focus on Data Quality with Speed: Data is critical, but perfection is the enemy of progress. Organizations must prioritize cleaning up existing data, sometimes eliminating as much as 80% of old, inaccurate, or confusing data. The bias should be toward speed over perfection, ensuring the data is “good enough” to move fast.
3. Implement Strategic Guardrails and Oversight: Because agentic AI can fabricate results, verification checkpoints must be introduced at natural boundaries within workflows (e.g., extract → compute → visualize → narrative). Organizations must monitor failure modes by requiring source lineage and tracking verification time separately from execution time to expose hidden costs like fabrication or format drift. Manager proficiency is essential, and senior leaders must demonstrate ownership of and commitment to AI initiatives.
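The "track verification time separately from execution time" recommendation can be made concrete with a small instrumentation sketch. This is an illustrative assumption of how such tracking might look, not a specific tool: the task name, phase labels, and workflow steps are hypothetical. The point is that review costs (fabrication checks, format fixes) become visible in the metrics instead of disappearing into total task time.

```python
# Sketch: accumulate wall-clock time per task, split into execution vs verification.
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(lambda: {"execution": 0.0, "verification": 0.0})

@contextmanager
def tracked(task: str, phase: str):
    """Context manager that adds elapsed time to timings[task][phase]."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[task][phase] += time.perf_counter() - start

# Hypothetical workflow: the agent drafts, a human reviews at the boundary.
with tracked("quarterly_report", "execution"):
    draft = "agent output..."      # agent does the work
with tracked("quarterly_report", "verification"):
    reviewed = draft.strip()       # human checks lineage, format, numbers

t = timings["quarterly_report"]
overhead = t["verification"] / (t["execution"] + t["verification"])
print(f"verification share: {overhead:.0%}")
```

If verification routinely consumes a large share of total time, the "88.3% faster" headline for agents overstates the real end-to-end savings.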
4. Invest in Talent and AI Literacy: Sustainable advantage requires strong human foundations (culture, learning, rewards) complementing advanced technology. Employees often use AI tools, with 24.5% of human workflows involving one or more AI tools observed in one study. Training should focus on enabling effective human-AI collaboration. Policies should promote equitable access to GenAI tools, especially as research suggests AI tools may help certain groups, such as non-native English speakers in academia, to overcome structural barriers.
Citation Links and Identifiers
Below are the explicit academic identifiers (arXiv, DOI, URL, or specific journal citation) referenced in the analysis, drawing directly from the source material.
Brynjolfsson, E., Li, D., & Raymond, L. (2025) – Generative AI at Work. DOI: 10.1093/qje/qjae044
Cui, J., Dias, G., & Ye, J. (2025) – Signaling in the Age of AI: Evidence from Cover Letters. arXiv:2509.25054
Wang et al. (2025) – How Do AI Agents Do Human Work? Comparing AI and Human Workflows Across Diverse Occupations. arXiv:2510.22780
Becker, J. et al. (2025) – Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. arXiv:2507.09089
Bick, A., Blandin, A., & Deming, D. J. (2024/2025) – The Rapid Adoption of Generative AI. NBER Working Paper 32966, http://www.nber.org/papers/w32966
Noy, S. & Zhang, W. (2023) – Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence. Science, 381(6654), 187–192
Eloundou, T. et al. (2024) – GPTs are GPTs: Labor Market Impact Potential of LLMs. Science, 384, 1306–1308
Patwardhan, T. et al. (2025) – GDPval: Evaluating AI Model Performance on Real-World Economically Valuable Tasks