PJFP.com

Pursuit of Joy, Fulfillment, and Purpose


  • Howard Marks on Why Most Investors Lose, the AI Bubble, India, and the Hunt for the $10 Bill Nobody Picked Up

    TLDW

    Howard Marks, co-founder of Oaktree Capital and the author of the memos every serious investor reads first, sat down with Nikhil Kamath for a wide-ranging conversation on his 50+ year career, the philosophy of Mujo (the inevitability of change), why he chose bonds over stocks, the difference between drifting down the river and seeing it, where we sit in the current cycle, AI as both threat and opportunity, why active management lost to indexation, and why the only way to outperform in a world full of smart, motivated, computer-literate competitors is “superior insight.” His core message: investing is a puzzle that cannot be solved by formula, and the only edge that lasts is being more right than the other person, more often, with the discipline to stay calm when everyone else is panicking or partying.

    Key Takeaways

    • Mujo is the operating system. Marks took Japanese literature at Wharton and walked away with one idea that shaped his whole career: change is inevitable, unpredictable, and uncontrollable. You cannot predict the future, but you can prepare for it.
    • Cycles are excesses and corrections, not ups and downs. The S&P 500 has averaged about 10% per year for 100 years, but it is almost never between 8% and 12% in any given year. The norm is not the average. Greed and fear push the pendulum past equilibrium every time.
    • The recovery is two years older. When asked where we are in the cycle, Marks notes the bull market continued from April 2024 through January 2026, so by definition we are deeper into the cycle, with a recovery distorted by the unique man-made COVID recession.
    • Drifting versus seeing the river. Marks describes the first 35 years of his career (roughly age 14 to 49) as drifting. Starting Oaktree in 1995 was the first truly intentional decision he made. Entrepreneurship forced proactivity on him.
    • Why bonds over equities. The contractual, predictable nature of debt suited his conservative temperament (his parents were adults during the Depression). He was not voluntarily moved to bonds in 1978; a boss reassigned him just in time for the birth of the high-yield bond market.
    • Distressed debt is the bigger story. Bruce Karsh joined in 1987 and has run roughly $70 billion in distressed debt since 1988, with gains accounting for well over 90% of the cumulative profit and loss.
    • Excess return is getting paid more than the risk warrants. If the market thinks a borrower has a 5% default probability and you correctly conclude it is 2%, you collect interest priced for 5% risk while taking 2% risk. That gap is the alpha.
    • Oaktree’s default rate is about a third of the market. Over 40 years, roughly 3.6% to 3.7% of high-yield bonds default each year. Oaktree’s rate is roughly one-third of that, achieved through process discipline, institutional memory, and analysts who stay analysts for life.
    • If you are starting a career today, understand AI. Marks says the investor who will make the most money over the next 10 years is the one who best understands AI and its capabilities, whether they bet for or against it.
    • AI is excellent at pattern matching, but cannot create new patterns. Can AI pick the Amazon out of five business plans? The Steve Jobs out of five CEOs? Marks bets no. Most humans cannot either, which means there is still a role for exceptional people.
    • Indexation won because active management lost. Passive did not become dominant because it is brilliant. It dominated because most active managers failed and charged high fees for the privilege.
    • Bad times create openings for active managers, but most cannot take them. Panic drives prices down, but the same panic prevents most investors from buying. Walt Deemer: when the time comes to buy, you will not want to.
    • The job is simple but not easy. Find the best managers, the best companies, the best ideas. Charlie Munger told Marks: anyone who thinks it is easy is stupid.
    • Where is the $10 bill nobody picked up? Marks thinks it is around AI, but only for those with insight above the average. If you are average and you crowd into AI, you get average results in a bull case and worse in a bear case.
    • Quantitative information about the present cannot produce alpha. Andrew Marks (Howard’s son) pointed this out to his father during the COVID lockdown. Everyone has the same data. Outperformance has to come from somewhere else.
    • Buffett’s edge was reading Moody’s Manuals when nobody else would. The pre-internet research process favored those willing to do tedious work alone. The format of the edge changes; the fact that edge requires doing what others will not, does not.
    • You cannot coach height. Marks can tell you that second-level thinking, contrarian insight, and the ability to evolve at 80 are essential. He cannot tell you how to acquire any of them.
    • India: Marks declines to opine. He has deployed roughly $4 billion in India but refuses to claim expertise on the Indian stock market or recommend a sector.
    • History rhymes. Marks credits Mark Twain. The lessons that repeat are lessons of human nature, which changes incredibly slowly.
    • Investing is a puzzle, not dentistry. Quoting Taleb, Marks observes that engineers and dentists succeed by repeating the right answer. Investors face a problem with no certain solution. If you need to be right every time, do not become an investor.

    Detailed Summary

    From Queens to Wharton: The Accidental Investor

    Howard Marks grew up in Queens, New York, in a middle-class family. Neither of his parents went to college, but his father was an intelligent accountant. Marks discovered accounting in high school, fell in love with its orderliness, and chose Wharton because he was told it was the best undergraduate business school in America. Wharton required a course in the literature of a foreign country and a non-business minor. For reasons he no longer remembers, Marks chose Japanese studies, then took Japanese civilization and Japanese art. He calls it the most important academic decision of his life because of one concept he encountered: Mujo.

    Mujo, Independence of Events, and Why You Cannot Predict

    Mujo, the turning of the wheel of the law, teaches that change is inevitable, unpredictable, and uncontrollable, and that humans must accommodate it rather than try to control it. Marks pairs this with his deep belief in the independence of events: ten heads in a row do not change the odds on flip eleven. Roughly 20 years ago he wrote a memo titled “You Can’t Predict. You Can Prepare.” A portfolio cannot be optimized for both extreme upside and extreme downside, but it can be built to perform respectably across many possible futures, if you suboptimize for the middle of the probability distribution.

    Why Cycles Exist

    If GDP averages 2% growth, why is it never simply 2%? Marks’s answer is excesses and corrections. Optimism leads producers to overbuild and consumers to overspend, growth runs above trend, then satiation and oversupply pull it back below trend. The S&P 500 averages 10% per year over a century, but the return in any given year is almost never between 8% and 12%. The norm is not the average because human beings are not average; they are alternately greedy and fearful.

    Where Are We Now?

    Two years ago Marks told the Norwegian Sovereign Wealth Fund’s Nicolai Tangen that we were near the middle of the cycle. Two years later, the bull market in stocks continued through January 2026, so by simple math the recovery is older. The COVID recession was a man-made anomaly: one quarter of negative growth followed by the best quarter in history, triggered by a deliberate global shutdown rather than by accumulated excess. That distorts every traditional cycle metric.

    Drifting Versus Seeing the River

    One of the most personal moments in the conversation is Marks’s confession that he drifted for the first 35 years of his career. He did not pick his career, his first job, or his transition from equities to bonds in any deliberate way. Other people pushed him; he said yes. The first proactive decision of his life was co-founding Oaktree in 1995 at age 49, and even that came largely because his wife and his partner Bruce Karsh pushed him into it. Once he had to lead, he had to be intentional. Leadership cannot be passive.

    The Bond Decision

    Marks did not choose bonds; bonds chose him. In May 1978 his boss at Citibank moved him to the bond department to start a convertible fund. Three months later another phone call asked him to figure out something called high-yield bonds being run by a guy in California named Milken. Marks said yes both times. He arrived at the front of the line for high-yield in 1978 and has been there for 48 years.

    The conservative temperament fit. Marks’s parents were adults during the Depression, so he grew up hearing “don’t put all your eggs in one basket” and “save for a rainy day.” Bonds offered contractual, predictable returns. The phrase “junk bonds” was a bias that made the asset class cheaply available to anyone willing to do the analytical work.

    Distressed Debt and Excess Return

    When Bruce Karsh joined in 1987, Oaktree launched what Marks believes was the first distressed debt fund from a mainstream institution. Karsh has managed about $70 billion since 1988 with well over 90% of the total being profit. The core skill is predicting default probability better than the market. If consensus prices a borrower at a 5% default risk and you correctly assess 2%, the interest you receive is overpaid relative to actual risk. Marks calls this “excess return” and credits Mike Milken with the foundational insight: lend to borrowers others will not, demand interest beyond what compensates you, and the math works.
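    The arithmetic behind that excess-return gap is simple enough to sketch. The 5% and 2% default probabilities come from the example above; the coupon and recovery rate in this sketch are illustrative assumptions, not Oaktree numbers.

```python
# Hedged sketch of the "excess return" arithmetic: a bond priced for a
# 5% annual default probability, held by a lender who correctly assesses 2%.
# The coupon and recovery rate are illustrative assumptions.

def expected_return(coupon, default_prob, recovery_rate):
    """Expected one-year return given a default probability and the
    fraction of principal recovered in default."""
    survive = (1 - default_prob) * (1 + coupon)
    default = default_prob * recovery_rate
    return survive + default - 1

coupon = 0.09      # coupon priced to compensate a 5% default risk (assumed)
recovery = 0.40    # a common high-yield recovery assumption

priced_for = expected_return(coupon, 0.05, recovery)  # what the market expects
actual = expected_return(coupon, 0.02, recovery)      # what the skilled lender earns
alpha = actual - priced_for

print(f"market-implied expected return: {priced_for:.2%}")
print(f"expected return at true 2% risk: {actual:.2%}")
print(f"excess return from superior credit judgment: {alpha:.2%}")
```

    Under these assumptions the gap works out to roughly two percentage points a year: interest priced for 5% risk, collected while bearing 2% risk.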

    Over 40 years, roughly 3.6% to 3.7% of high-yield bonds default annually on average. Oaktree’s default rate has been roughly one-third of that. Marks credits institutional culture (analysts who stay analysts for life), psychological stability in volatile periods, and a process that forces every analyst to ask the same eight questions of every company every time. In equity research, you can buy a stock for great management without examining the product, or for a great product without examining the management. In Oaktree’s bond process, you cover every base every time.

    Beginning a Career Today: The AI Question

    Asked what he would do today, Marks says the front of the line is AI. The investor who will succeed most over the next decade is the one who best understands AI, whether they bet for or against it. He notes that he was shocked by his own experience using Claude, but adds that he has not fired a single person and does not intend to.

    His view: AI excels at extracting patterns from history and applying them with discipline and without psychological wobble. But investing also requires creating new patterns. Can AI sit with five business plans and identify the future Amazon? Can it sit with five CEOs and pick Steve Jobs? Marks bets not. Then he adds the killer line: most humans cannot either. Which means the role for exceptional humans survives, but the bar gets higher.

    Why Indexation Won

    When Marks went to graduate school at the University of Chicago in 1968, his professor pointed out that most mutual funds underperformed the S&P after fees. Index funds did not exist yet; Jack Bogle launched the first retail index fund in 1976. Today, most equity mutual fund capital is passive. Marks’s controversial take: indexation did not win because it is great. It won because active management was so bad and so expensive. Even at equal fees, if active decisions are inferior, passive wins.

    Bad times create openings for active managers because panic drives prices down, but the same panic prevents most people from buying. Marks quotes the old trader Walt Deemer: when the time comes to buy, you will not want to. The advantage of an AI nudge that says “this is one of those moments, get your ass in gear and buy something” might genuinely add value, because it removes the emotion.

    Second-Level Thinking and Why You Cannot Coach It

    Marks’s first book, The Most Important Thing, has 21 chapters, each titled “The Most Important Thing Is…” Each one is different because so many things matter. The chapter on second-level thinking came to him spontaneously while writing a sample chapter for Columbia University Press. The argument is simple: if you think like everyone else, you act like everyone else, and you get the same results. To outperform, you must deviate from the herd and be more right than the herd. Different is not enough. Different and better is the bar.

    Can AI become a contrarian thinker? You can prompt Claude to give you only non-consensus answers, but the catch is that consensus is often close to right because the people building consensus are intelligent, educated, computer-literate, and motivated. Forcing non-consensus often forces wrong. The real edge is being non-consensus AND correct, which is a much narrower target.

    The $10 Bill That Nobody Has Picked Up

    Marks references the joke about the efficient market hypothesis: there is no $10 bill on the sidewalk because if there were, somebody would have already picked it up. He then concedes that the bill is probably around AI today, but only for those whose insight rises above the average. If you are average and you crowd into AI, you go along with the tide if it works and get crushed if it does not. Quoting Garrison Keillor’s Lake Wobegon, “where all the children are above average,” Marks notes that the math does not allow it. Most investors will not be above average, and acknowledging that is the first step toward becoming one of the few who are.

    Learning From Andrew, Buffett, and Onion-Skin Manuals

    Marks lived with his son Andrew during COVID and wrote a memo about it called “Something of Value” in January 2021. Andrew’s most important contribution was a near-revelation: readily available quantitative information about the present cannot be the source of investment alpha because everyone has it. Buffett’s edge in the 1950s was reading Moody’s Manuals (giant books printed on onion-skin paper with tiny type and zero narrative) when nobody else would. The medium changes; the principle that edge requires doing what others will not, does not.

    India

    Kamath asks Marks directly about India. Marks has deployed roughly $4 billion there but politely declines to claim any expertise on the Indian stock market or recommend a sector. He cautions Kamath about taking advice from people who do not know what they are talking about, and includes himself in that category on the question of India. The honesty is striking and is itself an investment lesson.

    History Rhymes, and Final Advice

    Marks reads Andrew Ross Sorkin’s 1929 and references it in an upcoming memo on private credit. He likes Mark Twain’s reputed line that history does not repeat but it rhymes, and Napoleon’s line that history is written by the winners of tomorrow. The lessons that rhyme are lessons of human nature, which evolves incredibly slowly. Fight or flight from the watering hole still drives behavior in financial markets.

    His final advice: investing is a puzzle, not engineering. A civil engineer calculates steel and concrete, builds the bridge, and the bridge stands. Every time. A dentist fills the cavity correctly and it stays filled. Every time. If you need that kind of reliability in your work, become a dentist. Investing is the act of positioning capital for a future that cannot be predicted accurately. You will be wrong sometimes. If something in your makeup cannot tolerate being wrong sometimes, do not become an investor. The puzzle has no final solution, which is exactly what makes it endlessly interesting.

    Thoughts

    The most useful thing Marks does in this conversation is admit, repeatedly and without ego, what he does not know. He does not know whether AI models differ in real intelligence. He does not know which sector in India to bet on. He does not know how to teach second-level thinking. He drifted for 35 years and only began making intentional decisions at 49. This honesty is the inverse of every guru selling certainty, and it is the actual content of the lesson he is trying to convey: epistemic humility is the precondition for superior insight, because you cannot acquire what you already think you have.

    The deepest insight in the conversation might be the one Andrew Marks (Howard’s son) gave his father during COVID: readily available quantitative information about the present cannot produce alpha because everyone has it. This is devastating in the AI era. If everyone is asking the same large language model the same question, the answers converge, and convergence is consensus, and consensus does not pay. The arms race for proprietary data, novel framings, and unconventional questions is the only thing that can break the convergence.

    Marks’s framing of cycles as excesses and corrections rather than ups and downs is genuinely useful. It reframes volatility from something to fear into something to expect, and reframes the question from “where are we going?” to “how far past trend have we already gone?” The 8 to 12 percent observation about the S&P (that the average return is almost never the actual return) is the kind of fact that should be taught in every introductory finance class but is almost never mentioned.

    The most contrarian claim in the conversation is the one about indexation: that it won because active was bad, not because passive is great. This is a useful inversion. Most defenders of passive investing argue from efficient market theory; Marks argues from the empirical failure of active managers. The implication is that if you can find the small population of active managers who genuinely outperform, the indexation argument falls apart for that subset. Most cannot. The hardest job in investing is the meta-job of identifying the few who can.

    The exchange about AI as a contrarian engine is one of the most clarifying short discussions of AI’s investment limits I have read. Different from consensus is easy. Different and better is the actual goal. Forcing different gets you wrong more often than right because consensus, built by smart, motivated, educated competitors, is usually close to correct. This is why “use AI to find non-consensus ideas” is a worse strategy than it sounds.

    Finally, the Buffett-Moody’s-Manual story is the most quietly profound moment in the interview. The edge in 1955 was the willingness to read tiny type on onion-skin paper alone in an office in Omaha when no one else would. The edge in 2026 is whatever the modern equivalent of that is, and the only honest answer is: nobody knows yet, which is precisely why finding it is worth so much money.

  • Elad Gil on the AI Frontier: Compute Constraints, the Personal IPO, and Why Most AI Founders Should Sell in the Next 12 to 18 Months

    Elad Gil sat down with Tim Ferriss for a wide-ranging conversation that pairs almost perfectly with his recent Substack post “Random thoughts while gazing at the misty AI Frontier.” Together, the podcast and the post lay out the cleanest framework I have seen for what is actually happening in AI right now: a Korean memory bottleneck capping every lab, a class-wide personal IPO across the research community, the fastest revenue ramps in capitalist history, and a brutal dot-com-style culling that most founders do not yet want to admit is coming. Below is a complete breakdown.

    TLDW (Too Long, Didn’t Watch)

    Elad Gil argues that AI is producing the fastest revenue ramps in capitalist history while setting up the same brutal power law that wiped out 99 percent of dot-com companies. OpenAI and Anthropic each sit at roughly 0.1 percent of US GDP today, on a path to 1 percent of GDP run rate by end of 2026, which is insanely fast by any historical standard. The current ceiling on capabilities is not chips but Korean high bandwidth memory, and that constraint will likely hold all major labs roughly comparable in capability through 2028. Talent has just experienced a class-wide personal IPO via Meta-led bidding, with packages running tens to hundreds of millions per researcher. Most AI companies should consider exiting in the next 12 to 18 months while the tide is high. Right now consensus is correct. Save the contrarianism for later.

    Key Takeaways

    • OpenAI and Anthropic are each at roughly 0.1 percent of US GDP. With US GDP near 30 trillion dollars and each lab at a roughly 30 billion dollar revenue run rate, AI has gone from essentially zero to 0.25 to 0.5 percent of GDP in just a few years. If the labs hit 100 billion in run rate by year end 2026 (which many expect), AI hits 1 percent of GDP run rate inside a single year.
    • The AI personal IPO is real. 50 to a few hundred AI researchers across multiple companies just experienced a class-wide IPO event due to Meta-led bidding, with top packages reportedly tens to hundreds of millions per person. The closest historical analog is early crypto holders around 2017.
    • The bottleneck is Korean memory, not Nvidia chips. High bandwidth memory from Hynix, Samsung, Micron, and others is the binding constraint. Expected to hold roughly two years. After that, power and data center buildout become the next walls.
    • No lab can pull dramatically ahead before 2028. Because every lab is compute constrained on the same input, OpenAI, Anthropic, Google, xAI, and Meta should remain roughly comparable in capability through that window, absent an algorithmic breakthrough that stays inside one lab.
    • Compute is the new currency. Token budgets now define what an engineer can accomplish, what a company can spend, and what business models are viable. Some companies (neoclouds, Cursor) are effectively inference providers disguised as tools.
    • The dot com base rate is the AI base rate. Around 1,500 to 2,000 companies went public in the late 1990s internet cycle. A dozen or two survived. AI will likely look the same.
    • Most AI founders should consider selling in the next 12 to 18 months. If you are not in the durable handful, this is your value maximizing window. A handful of companies (OpenAI, Anthropic) should never sell.
    • Buyers are bigger than ever. One percent of a 3 trillion dollar market cap is 30 billion dollars. That math makes massive AI acquisitions trivial for hyperscalers, vertical incumbents, and adjacent giants.
    • Underrated exit path: merger of equals. Two private AI competitors destroying each other on price should consider just merging. PayPal and X.com did exactly this in 2000.
    • 91 percent of global AI private market cap sits in a 10 by 10 mile square. If you want to do AI, move to the Bay Area. Remote work for cluster industries is BS.
    • Want money? Ask for advice. Want advice? Ask for money. The inverse also works: offering useful advice frequently leads to inbound investment opportunities.
    • AI is selling units of labor, not software. The shift is from selling seats and tools to selling cognitive output. This is why Harvey can win in legal, where decades of legal SaaS failed.
    • AI eats closed loops first. Tasks that can be turned into testable closed loop systems (code, AI research) get automated fastest. Map jobs on a 2×2 of closed loop tightness vs economic value to see where AI hits soonest.
    • Headcount will flatten at later stage companies. Multiple late stage CEOs told Elad they will not do big AI layoffs but will simply stop growing headcount even as revenue grows 30 to 100 percent. Hidden layoffs are also hitting outsourcing firms in India and the Philippines first.
    • The Slop Age could be the golden era of AI plus humanity. AI produces useful slop at volume, humans desloppify it, leverage is high, and the work is fun. This window may close as AI gets superhuman.
    • Market first, team second (90 percent of the time). Great teams die in bad markets. The exception is when you meet someone truly exceptional at the very earliest stage.
    • The one belief framework. If your investment memo needs three core beliefs to be true, it is too complicated. Coinbase was an index on crypto. Stripe was an index on e-commerce. That was the entire memo.
    • The four year vest is a relic. It exists because in the 1970s companies actually went public in four years. Today the private window has stretched to 20 years and venture has eaten what used to be public market growth investing.
    • Boards are in-laws. You cannot fire investor board members. Take a worse price for a better board member, because as Naval Ravikant said, valuation is temporary, control is forever.
    • Right now, consensus is correct. Save the contrarianism. The smart move is to just buy more AI exposure rather than try to outsmart the obvious.
    • Distribution wins more than founders admit. Google paid hundreds of millions to push the toolbar. Facebook bought ads on people’s own names in Europe. TikTok spent billions on user acquisition. Allbirds (yes, the shoe company) just raised a convert to build a GPU farm.
    • Anti-AI sentiment will get worse before it gets better. Maine banned new data centers. There has been violence directed at AI leaders. Expect more political and activist backlash, especially as AI is blamed for harms it has not yet caused while its benefits are mismeasured.
    • Use AI as a cold reader. Elad uploads photos of founders to AI models with cold reading prompts and reports surprisingly accurate personality assessments based on micro features.

    Detailed Summary

    The Numbers Are Insane and Mostly Underappreciated

    The most stunning data point in either source is the GDP math. US GDP is roughly 30 trillion dollars. OpenAI and Anthropic are each rumored to be at roughly 30 billion dollars in revenue run rate, putting each one at 0.1 percent of US GDP. Add cloud AI revenue and the picture gets stranger: AI has grown from essentially zero to between 0.25 and 0.5 percent of GDP in only a few years. If the labs hit 100 billion in run rate by year end 2026, AI will be at roughly 1 percent of GDP run rate inside a single year. There is no historical analog for that pace. Elad notes that productivity gains from AI may end up mismeasured the way internet productivity was undercounted in the 2000s, which would have downstream consequences for regulation: AI gets blamed for the bad (job losses) and credited for none of the good (new jobs, education gains, healthcare improvements). His half joking aside is that the real ASI test may be the ability to actually measure AI’s economic impact.
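    The run-rate math in this passage reduces to a few divisions. The figures below are the rough numbers quoted in the conversation, not precise statistics.

```python
# The GDP arithmetic from the passage, spelled out with the rough
# figures quoted in the conversation.

US_GDP = 30e12        # ~ $30 trillion
lab_run_rate = 30e9   # OpenAI and Anthropic, each ~ $30B revenue run rate

share_per_lab = lab_run_rate / US_GDP
print(f"each lab today: {share_per_lab:.1%} of GDP")

# If each lab reaches a $100B run rate by year end 2026:
future_share = 2 * 100e9 / US_GDP
print(f"two labs at $100B each: {future_share:.2%} of GDP")
# ~0.67% from the two labs alone; adding cloud AI revenue is what
# gets the total to roughly 1 percent.
```

    The striking part is not any single number but the slope: from 0.1 percent per lab to a combined run rate near 1 percent of GDP inside roughly a year.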

    The AI Personal IPO

    The most underdiscussed phenomenon in AI right now, according to Elad, is what he calls a class-wide personal IPO. When a company IPOs, a subset of employees become wealthy, lose focus, and either start companies, get into politics, fund passion projects, or check out. Meta started aggressively bidding for AI talent. Other major labs had to match. The result was 50 to a few hundred researchers, scattered across multiple labs, suddenly receiving compensation in the tens to hundreds of millions of dollars range. The only historical analog Elad can think of is early crypto holders around 2017. Some chunk of these newly wealthy researchers will redirect attention to AI for science, side projects, or quiet quitting. The aggregate field stays mission aligned, but the distribution of attention has shifted.

    The Korean Memory Bottleneck

    Every major AI lab today is building giant Nvidia clusters paired with high bandwidth memory primarily from Korean fabs and a few other suppliers. They run massive amounts of data through these clusters for months, and the output is, almost absurdly, a single flat file containing what amounts to a compressed version of human knowledge plus reasoning. Right now, the binding constraint on this whole stack is HBM memory from Hynix, Samsung, Micron, and others. Korean memory fab capacity has been below the capacity of every other piece of the system. Elad estimates this constraint persists for roughly two years. After that, the next walls are likely data center construction and power. The strategic implication is enormous. While memory constrains everyone, no single lab can buy 10x the compute of its rivals, so capabilities should stay roughly comparable across the major labs. Once that constraint lifts, possibly around 2028, one player could theoretically pull dramatically ahead, especially if AI assisted AI research closes a self improvement loop inside one lab.

    Compute Is the New Currency

    The blog post sharpens a framing that runs throughout the podcast: compute, denominated in tokens, is now a unit of economic value. Token budgets define what an engineer can accomplish, what a company can spend, and what business models work. Some companies are effectively inference providers wearing tool costumes. Neoclouds are the cleanest example. Cursor is another, subsidizing inference as a user acquisition strategy. The most absurd recent example: Allbirds, the shoe company, raised a convertible to build a GPU farm. Whether this becomes the AI version of MicroStrategy’s Bitcoin trade or a cautionary tale, it tells you where the cost of capital believes the next decade is going.

    The Dot Com Survival Math

    Elad walks through the brutal arithmetic that AI founders should be internalizing. In the late 1990s and early 2000s, somewhere between 1,500 and 2,000 internet companies went public. Of those, roughly a dozen or two survived in any meaningful form. Every cycle has looked like this: automotive in the early 1900s, SaaS, mobile, crypto. There is no reason AI will be different. Most current AI companies, including those ramping revenue today, will see the market, competition, and adoption turn on them. The question every AI founder should be asking is whether they are in the durable handful or not.

    Most AI Companies Should Consider Exiting in the Next 12 to 18 Months

    This is the most actionable and most uncomfortable take in either source. While the tide is rising, every AI company looks unstoppable. Whether they actually are, in a 10 year frame, is a separate question. Founders running successful AI companies should take a cold honest look at whether the next 12 to 18 months is their value maximizing window. Companies typically have a 6 to 12 month peak before some headwind hits, often visible in the second derivative of growth. The best signal that you should sell is when growth rate is starting to plateau and you can see why. A handful of companies (OpenAI, Anthropic, the durable winners) should never exit. Many others should, while everything is still on the upswing.
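    The “second derivative of growth” signal can be made concrete with a toy series: compute period-over-period growth rates, then watch how those rates change. The revenue numbers below are invented for illustration, not figures from the conversation.

```python
# Toy illustration of watching the second derivative of growth:
# revenue keeps rising, but the growth rate itself is decelerating.

revenues = [10, 15, 22, 31, 41, 50]  # quarterly revenue, $M (hypothetical)

# First derivative: quarter-over-quarter growth rate.
growth = [b / a - 1 for a, b in zip(revenues, revenues[1:])]

# Second derivative: change in the growth rate from one quarter to the next.
second_derivative = [b - a for a, b in zip(growth, growth[1:])]

for q, (g, d) in enumerate(zip(growth[1:], second_derivative), start=2):
    flag = "  <- growth decelerating" if d < 0 else ""
    print(f"Q{q}: growth {g:.0%}, change {d:+.1%}{flag}")
```

    In this made-up series revenue grows every quarter, yet every entry in the second derivative is negative: exactly the pattern of a visible plateau forming while the headline numbers still look great.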

    What Makes an AI Company Durable

    Elad lays out four lenses for evaluating durability at the application layer:

    1. Does your product get dramatically better when the underlying model gets better, in a way that keeps customers loyal?
    2. How deep and broad is the product? Are you building multiple integrated products embedded in actual workflows?
    3. Are you embedded in real change management at the customer? AI adoption is mostly a workflow change problem, not a tech problem. Workflow embedding is durable.
    4. Are you capturing and using proprietary data in a way that creates a system of record? Data moats are often overstated, but sometimes real.

    At the lab layer, Elad believes OpenAI, Anthropic, and Google are durable absent disaster. He predicted three years ago that the foundation model market would settle into an oligopoly aligned with cloud, and that prediction has roughly held.

    Selling Work, Not Software

    The deepest structural insight in the conversation is that generative AI is shifting what software companies sell. The old model was selling seats, tools, and SaaS subscriptions. The new model is selling units of cognitive labor. Zendesk sold seats to support reps. Decagon and Sierra sell agentic support output. Harvey can win in legal even though selling to law firms was historically considered a terrible business, because Harvey is not selling tools, it is augmenting lawyer output. This shift opens markets that were previously closed and dramatically grows tech TAMs. It is also why founder-limited theories of entrepreneurship currently understate how many opportunities exist.

    AI Eats Closed Loops First

    One of the cleanest mental models in the blog post is the closed loop framework. AI automates first what can be turned into a testable closed loop. Code is the canonical example: outputs can be tested, errors detected, models can iterate. AI research is similar. Both have tight feedback loops and high economic value, which puts them at the top of the AI impact ranking. Map jobs on a 2×2 of closed loop tightness vs economic value and you can see where AI hits soonest. The interesting forward question is which jobs become more closed loop next. Data collection and labeling will keep growing in every field as a result.
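    The 2×2 can be sketched directly. A small Python illustration, with hypothetical jobs and scores invented for the example (neither the jobs nor the numbers come from the post): each job gets a closed-loop-tightness score and an economic-value score, and the high/high quadrant is where AI lands first.

```python
# Hypothetical (closed-loop tightness, economic value) scores on a 0-1 scale.
# The jobs and numbers are illustrative, not from the source.
jobs = {
    "software engineering": (0.9, 0.9),
    "AI research":          (0.8, 0.9),
    "customer support":     (0.7, 0.5),
    "graphic design":       (0.4, 0.4),
    "elder care":           (0.1, 0.6),
}

def quadrant(loop, value, cut=0.5):
    """Place a job in the 2x2; tight loop + high value is hit soonest."""
    return (("tight loop" if loop >= cut else "open loop"),
            ("high value" if value >= cut else "low value"))

# Rank by the product of the two scores as a rough impact ordering.
for job, (loop, value) in sorted(jobs.items(),
                                 key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(f"{job:22s} -> {quadrant(loop, value)}")
```

    Code and AI research sit at the top of this ordering for exactly the reason the framework predicts: machine-testable outputs and high economic value.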

    The Harness Matters More Than People Think

    For coding tools, and increasingly for enterprise applications, what Elad calls the harness (the wrapper of UX, prompting, workflow integration, and brand around the underlying model) is becoming sticky. It is not just which model you call; it is the environment built around it. Cursor and Windsurf demonstrate this in coding. The interesting open questions are what the harness looks like for sales AI, for AI architects, and for analyst workflows. Those gaps leave room for startups even as model capabilities converge.

    Hidden Layoffs and the Developing World

    Most announced AI-driven layoffs are probably just COVID-era overhiring corrections wrapped in a more flattering narrative. But real AI-driven labor displacement is happening, and it is hitting outsourcing firms first. That means countries like India and the Philippines, where many outsourced services jobs sit, are likely to be the most impacted earliest. Several developing economies built their growth ladders on services exports. If AI takes those jobs first, the migration and economic patterns of the next decade may shift in ways nobody is yet planning for.

    The Flat Company

    Multiple late stage CEOs told Elad they will not announce big AI layoffs. Instead, they will simply stop growing headcount. If revenue grows 30 to 100 percent, headcount stays flat or shrinks via attrition. Existing employees become dramatically more productive. The very best people who can leverage AI will see compensation inflate. Sales and some growth engineering keep hiring. Almost everything else flatlines. This is mostly a later stage and public company phenomenon. True early stage startups should still scale aggressively after product market fit, just with more leverage per person.
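    The flat-company arithmetic is simple to sketch. A hypothetical Python example (the $100M starting revenue and 500-person headcount are invented; the 50% growth rate sits inside the 30 to 100 percent range Elad cites): revenue compounds while headcount stays fixed, so revenue per employee climbs fast.

```python
# Revenue per employee under the "flat company" model described above.
# Starting figures are hypothetical: $100M revenue, 500 employees.
revenue, headcount = 100e6, 500

for year in range(1, 4):
    revenue *= 1.5   # 50% annual growth, inside the 30-100% range cited
    # Headcount stays flat by design (attrition and backfills not modeled).
    per_head = revenue / headcount
    print(f"Year {year}: ${per_head / 1e3:,.0f}K revenue per employee")
```

    Revenue per employee more than doubles in three years without a single layoff announcement, which is exactly why late stage CEOs prefer this quiet version.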

    Exit Options for AI Founders

    Elad lays out four exit categories. First, the labs and hyperscalers themselves: Apple, Amazon, Google, Microsoft, Meta. Second, vertical incumbents like Thomson Reuters for legal or healthcare giants for clinical AI. Third, the underrated category of a merger of equals between two private AI competitors who are currently destroying each other on price. PayPal and X.com did this in 2000. Uber and Lyft reportedly almost did. Fourth, large adjacent tech companies: Oracle, Samsung, Tesla, SpaceX, Snowflake, Databricks, Stripe, Coinbase. The market cap math has changed in a way that makes acquisition trivial. One percent of a three trillion dollar market cap is 30 billion dollars, which means a hyperscaler can do massive acquisitions almost casually.
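    The market cap math is worth spelling out. A quick Python check of the figure in the paragraph above, plus a few other deal sizes at the same $3 trillion denominator:

```python
# The arithmetic behind "acquisition is almost casual at hyperscaler scale".
hyperscaler_market_cap = 3_000_000_000_000   # $3 trillion, the figure in the post

deal_size = 0.01 * hyperscaler_market_cap    # an all-stock deal worth 1% of market cap
print(f"1% of $3T = ${deal_size / 1e9:.0f}B")

# Dilution required at a few other deal sizes:
for pct in (0.5, 1, 2, 5):
    deal = pct / 100 * hyperscaler_market_cap
    print(f"{pct:>4}% of market cap buys a ${deal / 1e9:,.0f}B company")
```

    Even a $150 billion acquisition costs a $3 trillion acquirer only 5 percent dilution, which is why the exit door for AI founders is wider than historical M&A intuition suggests.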

    Geographic Concentration Is Extreme

    Elad’s team analyzed where private market cap aggregates. Historically, half of global tech private market cap sat in the US, with half of that in the Bay Area. With AI, 91 percent of global AI private market cap sits in a single 10 by 10 mile square in the Bay Area. New York is a distant second, and then it falls off a cliff. For defense tech, the cluster is Southern California (SpaceX, Anduril, El Segundo, Irvine). Fintech and crypto skew toward New York. The “remote from anywhere” advice is, Elad says, just BS for anyone trying to break into an industry cluster.

    How Elad Got Into His Best Deals

    Stripe started with Elad cold emailing Patrick Collison after selling an API company to Twitter. A couple of walks later, Patrick texted that he was raising and Elad was in. Airbnb came from helping the founders raise their Series A and being asked at the end if he wanted to invest. Anduril came from noticing that Google had shut down Project Maven and asking if anyone was building defense tech, then meeting Trae Stephens at a Founders Fund lunch. Perplexity came from Aravind Srinivas cold messaging him on LinkedIn while still at OpenAI. Across all of these, the pattern is the same: be in the cluster, be helpful, be talking publicly about technology nobody else is talking about, and be useful to founders before any money is on the table.

    The One Belief Framework

    Investors love complicated 50 page memos. Elad believes the actual decision usually collapses into a single core belief. Coinbase: this is an index on crypto, and crypto will keep growing. Stripe: this is an index on e-commerce, and e-commerce will keep growing. Anduril: AI plus drones plus a cost plus model will be important for defense. If your thesis needs three things to be true, it is probably not going to work. If it needs nothing, you have no thesis.

    Boards as In-Laws

    Elad emphasizes that founders should treat board composition like one of the most important hiring decisions of the company. You cannot fire an investor board member. They have contractual rights. So if you are going to be stuck with someone for a decade, take a worse valuation for a better human. Reid Hoffman’s framing is that the best board member is a co-founder you could not have otherwise hired. Naval Ravikant’s framing is that valuation is temporary but control is forever. Elad recommends writing a job spec for every board seat.

    The Slop Age as a Golden Era

    One of the warmest takes in the blog post is the framing of the current moment as the Slop Age, and the suggestion that this might actually be the golden era of AI plus humanity. Before the last few years, AI was inaccessible and narrow. Eventually AI may become superhuman at most tasks. Today, AI produces useful slop at volume, which means humans are still needed to desloppify the slop, but the leverage on time and ambition is real. That makes the work fun. If AI displaces people or starts doing more interesting work, this golden moment fades. Elad also notes the obvious counter, that the era of human generated internet slop preceded the AI slop era. AGI may end the slop age, or alternatively may be the thing that finally cleans up all the prior waves of human slop.

    Anti-AI Regulation and Violence Will Increase

    This is one of the more sobering threads in the blog post. Real-world AI-driven labor displacement has been small so far, but anti-AI sentiment is already strong and growing. Maine just banned new data centers. There has been actual violence directed at AI leaders, including a recent attack on Sam Altman. Elad’s view is that AI leaders should work harder on optimistic public framing, real political lobbying, and reining in the doom narrative coming from inside the field. Otherwise the regulatory and activist backlash will get much worse, and likely on the basis of mismeasured impacts.

    Right Now Consensus Is Correct

    The headline contrarian take from the episode is that contrarianism right now is wrong. There are moments in time when betting against the crowd pays. This is not one of them. The smart bet is just buying more AI exposure. Trying to find the clever angle, the underlooked hardware play, the secret macro thesis, is overthinking it. Save the contrarian moves for later in the cycle.

    Distribution Almost Always Matters

    Elad pushes back on the founder mythology that great products win on their own. Google paid hundreds of millions of dollars in the early 2000s to distribute its toolbar through every popular app installer on the internet. Facebook bought search ads against people’s own names in European markets to seed network liquidity. TikTok spent billions on user acquisition before its algorithm could lock people in. Snowflake spent enormous sums on enterprise sales and channel partnerships. Sometimes the best product wins. Often the company with the best distribution wins. Founders should plan for both.

    AI as a Cold Reader and a Research Partner

    Two of the more practical AI workflows Elad describes: First, uploading photos of founders to AI models with cold-reading prompts that ask the model to identify micro features (crow’s feet from genuine smiling, brow patterns, posture cues) and infer personality traits, sense of humor, and likely social behavior. He reports the outputs are surprisingly specific. Second, running deep dives across multiple models in parallel (Claude, ChatGPT, Gemini), asking each for primary sources, summary tables, and cross-checked data. He recently used this approach to investigate the rise in autism and ADHD diagnoses, concluding that diagnostic criteria shifts and school incentives drive most of it, and noting that maternal age has a stronger statistical association with autism than paternal age, despite paternal age getting all the public discourse.

    The First Ever 10 Year Plan

    For someone who has been compounding aggressively for two decades, Elad has somehow never written a 10 year plan until now. He knows it will not play out as written. The point is that the act of imagining a decade out shifts what you choose to do in the near term. He explicitly rejects the “AGI in two years, therefore plans are pointless” framing as defeatist. There will be interesting things to do regardless of how the AGI timeline plays out.

    Thoughts

    This is one of the more useful AI investor conversations of 2026, mostly because Elad is willing to put numbers and timelines on things that are usually left vague. Pairing the podcast with the underlying Substack post is the right move because the post is where the GDP math, the closed loop framework, and the Slop Age framing actually live. The podcast is where Elad explains how he thinks rather than just what he thinks.

    The 12 to 18 month sell window framing is the most actionable single idea in either source, and probably the most uncomfortable for AI founders sitting on multi billion dollar paper valuations. The math is unforgiving. A dozen winners out of thousands. If you are honest with yourself about whether you are in the dozen, you know what to do.

    The Korean memory bottleneck framing explains a lot of current behavior. The talent wars make more sense once you accept that compute is not going to be the differentiator for two years, so people become the only remaining lever. The convergence of capabilities across OpenAI, Anthropic, Google, and xAI starts to look less like coincidence and more like the structural inevitability of a supply constrained input. The 2028 inflection date is the one to watch.

    Compute as currency is the cleanest reframing in the blog post. Once you start pricing companies in tokens rather than dollars, everything from Cursor’s economics to Allbirds raising a convert to build a GPU farm becomes legible. The interesting question is whether this is a permanent unit of denomination or a transitional one that fades when inference costs collapse.

    The software to labor argument is the structural framing that I think will hold up the longest. Once you internalize that we are not selling seats anymore but selling cognitive output, every vertical that was previously locked behind ugly procurement and IT inertia opens up. Harvey is the proof of concept. There will be 30 more Harveys across every white collar profession.

    The closed loop framework is the cleanest predictor of which jobs get hit hardest and soonest. If you want to know whether your role is exposed, the questions to ask are whether outputs can be machine evaluated, how tight the feedback loop is, and how high the economic value is. The intersection is where AI lands first.

    The geographic concentration data is genuinely shocking. 91 percent of global AI private market cap in a 10 by 10 mile area is the kind of statistic that should make everyone outside that square think very carefully about what game they are playing.

    The Slop Age framing is the most emotionally honest moment in the post. We are in a window where humans still meaningfully add value on top of AI output. That window is finite. Enjoy it.

    The anti-AI backlash thread is the one I think most people in the industry are still underweighting. Maine banning new data centers is a leading indicator, not a one off. The fact that the impacts are likely to be mismeasured by official statistics makes the political dynamics worse, not better. AI will get blamed for harms it did not cause and credited for none of the gains. If the field’s leaders do not start communicating better and lobbying smarter, the regulatory environment in 2028 will be much worse than in 2026.

    Finally, Elad’s first ever 10 year plan stands out as the most quietly important moment in the episode. The implicit message is that even people who have been compounding aggressively for two decades benefit from forcing a longer time horizon onto their thinking. Most plans fail. The act of planning still changes what you do today.

    Read the original Elad Gil post here: Random thoughts while gazing at the misty AI Frontier. Find Elad on X at @eladgil, on his Substack at blog.eladgil.com, and on his website at eladgil.com. Tim Ferriss publishes the full episode at tim.blog/podcast.

  • Nicolai Tangen on Managing the World’s Largest Sovereign Wealth Fund: Insights from The David Rubenstein Show

    Nicolai Tangen isn’t your typical financial titan. On February 20, 2025, he sat down with David Rubenstein on “The David Rubenstein Show: Peer-to-Peer Conversations,” filmed a month earlier at the Bloomberg House in Davos. As CEO of Norges Bank Investment Management, Tangen runs the world’s largest sovereign wealth fund—$1.8 trillion strong, dwarfing all others. The episode, already at 7,983 views on YouTube, pulls back the curtain on a guy who traded hedge fund glory for a shot at serving Norway. Here’s what he revealed.

    The fund, nicknamed the “Oil Fund,” owes its existence to a frigid night in 1969. Phillips Petroleum hit the jackpot on the Norwegian Shelf, striking the biggest offshore oil find ever at the time. Tangen recounted the moment: a 2 a.m. wake-up call to the Ocean Viking platform chief, followed by a Christmas Eve announcement that changed Norway forever. Started in 1996 with 2 billion Norwegian kroner, it’s now a 20-trillion-kroner behemoth, funding 20-25% of the country’s budget thanks to a strict 3% spending cap. Tangen’s job? Steer this giant, owning chunks of over 9,000 companies worldwide, through calm and chaos alike.
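    The fiscal rule’s arithmetic is easy to check. A quick Python sketch using the figures from the interview (a roughly 20-trillion-kroner fund, a 3% spending cap, covering 20-25% of the national budget):

```python
# Back-of-envelope check on the fund figures from the interview.
fund_nok = 20e12       # ~20 trillion Norwegian kroner
spending_cap = 0.03    # fiscal rule: draw at most ~3% of the fund per year

annual_draw = fund_nok * spending_cap
print(f"Max annual draw: {annual_draw / 1e9:.0f}B NOK")

# If that draw covers 20-25% of the national budget, the implied budget is:
for share in (0.20, 0.25):
    print(f"Implied budget at {share:.0%}: {annual_draw / share / 1e12:.1f}T NOK")
```

    A 3% cap on a 20-trillion-kroner fund yields a 600-billion-kroner annual draw, consistent with it covering a fifth to a quarter of a budget in the 2.4 to 3 trillion kroner range.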

    His approach is steady, not sexy. “You want to be widely diversified,” he told Rubenstein. Tactical bets are a nightmare with a fund this size, so he preaches spreading the risk—across assets, across borders. He’s a contrarian at heart, eyeing beaten-down Chinese stocks while others chase U.S. tech. AI’s been a goldmine, with American tech giants padding the fund’s returns and his team boasting a 15% efficiency bump from new tools. But he’s not blind to today’s risks. With Trump in office, Tangen sees U.S. deregulation juicing short-term gains, offset by tariff pain for Europe and inflation threats from tight labor and big debt.

    Pressure’s a constant companion. The fund’s value ticks live on its website—13 updates a second—and Norway’s 5 million citizens watch closely. “There’s always something going wrong somewhere,” Tangen said, shrugging off the endless gripes about too much of this stock or too little of that. He’s applied for another five-year term, banking on his team’s track record and a push for transparency that’s made Norges the most open fund globally. ESG? Still a priority in Norway, despite America’s cooling on it. His worries keep him up at night: inflation spikes or a wild-card disaster—think Covid or a nuclear mess.

    Tangen’s path to this gig is a hell of a tale. Born in Kristiansand, he studied Russian in Norway’s intelligence service before landing at Wharton, where humility took a backseat to world-conquering bravado. He built AKO Capital into a $20 billion hedge fund powerhouse, then walked away, handing his stake to a charitable foundation and joining the Giving Pledge with a billion-plus net worth. “Happiness is about learning,” he said, rejecting the chase for more cash. “The person with the most money when they die has lost.” Now, he skis, picks wild mushrooms for chanterelle spaghetti, and dreams of another degree—maybe not art history, since he bombed that once.

    This isn’t just a finance story—it’s a human one. Tangen’s a rarity: a guy who’s crushed it in the cutthroat private sector, then pivoted to public service without losing his soul. The full interview’s on YouTube (catch it here), and it’s worth every minute. From oil rigs to AI, from Oslo to Davos, he’s proof you can manage a fortune and still keep your feet on the ground.