2025: The year in LLMs

This is the third in my annual series reviewing everything that happened in the LLM space over the past 12 months. For previous years see Stuff we figured out about AI in 2023 and Things we learned about LLMs in 2024.

It’s been a year filled with a lot of different trends. — Read More

#strategy

How to Land a $500K AI PM Job at OpenAI (The 2026 Playbook)

… The talent shortage is brutal. Every company needs AI PMs. Few people have the skills.

OpenAI, Anthropic, Google DeepMind, and Meta all have open AI PM roles. They can’t fill them fast enough.

The hiring bar is high. You need product sense, technical depth, and hands-on AI experience. Most PMs have one or two. You need all three.

… The gap between supply and demand means comp packages keep climbing. Base salary plus equity plus signing bonuses. $500K is common. $700K+ for senior roles.

The AI PM job market dynamics show why this won’t change soon. — Read More

#strategy

AI Took My Friend’s Job — But Tripled His Salary 6 Months Later (Here’s What Nobody’s Telling You)

Last month, my college roommate Jake sent me a panicked text at 2 AM.

“Dude. ChatGPT just wrote better code than me in 30 seconds. Am I screwed?”

Jake’s a software engineer at a mid-sized tech company. Makes $140K. Has a mortgage. Two kids. He’d just spent three weeks on a feature that Claude finished in minutes.

I get it. The headlines are terrifying. Every week there’s a new story about AI “coming for your job.” Anthropic’s CEO warned that AI could replace half of all entry-level office jobs within five years. Goldman Sachs economists predict 6–7% of the US workforce could be displaced.

But here’s what nobody’s talking about: I just spent 40 hours analyzing over 2 billion job postings, academic studies, and labor market data from 2022–2025.

The truth? It’s the exact opposite of what you think. — Read More

#strategy

The Shape of Artificial Intelligence

The shape of things only becomes legible at a distance. For instance, history demands temporal distance.

… Although AI is nearing its 70th birthday, it’s been only five years since ChatGPT was launched, eight since the transformer paper was published, and thirteen since AlexNet’s victory on the ImageNet challenge, which implies the deep learning revolution is barely a wayward teenager. I think, however, that we must try to give a clearer shape to the current manifestation of AI (chatbots, large language models, etc.). We are the earliest historians of this weird, elusive technology, and as such, it’s our duty to begin a conversation that’s likely to take decades (or centuries, if we remain alive by then) to be fully fleshed out, once spatial and temporal distance reveal what we’re looking at. — Read More

#strategy

The changing drivers of LLM adoption

In the world of AI, half a year is a very long time. Back in July, we saw LLMs being adopted faster than almost any other technology in history. Five months later we’re still seeing rapid growth, but we’re also seeing early winds of change — both in who uses AI and how they do so.

Using the latest public data, and a poll of US adults we conducted with Blue Rose Research, this post shares an updated picture of the state of LLM adoption. — Read More

#strategy

AI agents are starting to eat SaaS

We spent fifteen years watching software eat the world. Entire industries got swallowed by software – retail, media, finance, you name it. There has been incredible disruption over the past couple of decades, accompanied by a proliferation of SaaS tooling. This has led to a huge swath of SaaS companies – valued, collectively, in the trillions.

In my last post, debating whether the cost of software has dropped 90% with AI coding agents, I mainly looked at the supply side of the market. What will happen to demand for SaaS tooling if this hypothesis plays out? I’ve been thinking a lot about these second- and third-order effects of the changes in software engineering.

The calculus on build vs buy is starting to change. Software ate the world. Agents are going to eat SaaS. — Read More

#strategy

Economics of Orbital vs Terrestrial Data Centers

Before we get nerd sniped by the shiny engineering details, ask the only question that matters. Why compute in orbit? Why should a watt or a flop 250 miles up be more valuable than one on the surface? What advantage justifies moving something as mundane as matrix multiplication into LEO?

That “why” is almost missing from the public conversation. People jump straight to hardware and hand-wave the business case, as if the economics are self-evident. They aren’t. A lot of the energy here is FOMO and aesthetic futurism, not a grounded value proposition.

… This is all to say that the current discourse is increasingly bothering me due to the lack of rigor. — Read More

#strategy

Is It a Bubble?

Ours is a remarkable moment in world history. A transformative technology is ascending, and its supporters claim it will forever change the world. To build it requires companies to invest a sum of money unlike anything in living memory. News reports are filled with widespread fears that America’s biggest corporations are propping up a bubble that will soon pop.

… One of the most interesting aspects of bubbles is their regularity, not in terms of timing, but rather the progression they follow. Something new and seemingly revolutionary appears and worms its way into people’s minds. It captures their imagination, and the excitement is overwhelming. The early participants enjoy huge gains. Those who merely look on feel incredible envy and regret and – motivated by the fear of continuing to miss out – pile in. They do this without knowledge of what the future will bring or concern about whether the price they’re paying can possibly be expected to produce a reasonable return with a tolerable amount of risk. The end result for investors is inevitably painful in the short to medium term, although it’s possible to end up ahead after enough years have passed.

… I took the quote that opens this memo from Derek Thompson’s November 4 newsletter entitled “AI Could Be the Railroad of the 21st Century. Brace Yourself,” about parallels between what’s going on today in AI and the railroad boom of the 1860s. Its word-for-word applicability to both shows clearly what’s meant by the phrase widely attributed to Mark Twain: “history rhymes.” — Read More

#strategy

Why AGI Will Not Happen

If you are reading this, you probably have strong opinions about AGI, superintelligence, and the future of AI. Maybe you believe we are on the cusp of a transformative breakthrough. Maybe you are skeptical. This blog post is for those who want to think more carefully about these claims and examine them from a perspective that is often missing in the current discourse: the physical reality of computation.

I have been thinking about this topic for a while now, and what prompted me to finally write this down was a combination of things: a Twitter thread, conversations with friends, and a growing awareness that the thinking around AGI and superintelligence is not just optimistic, but fundamentally flawed. The purpose of this blog post is to address what I see as very sloppy thinking, thinking that is created in an echo chamber, particularly in the Bay Area, where the same ideas amplify themselves without critical awareness. This amplification of bad ideas and thinking, exuded by the rationalist and EA movements, is a big problem in shaping a beneficial future for everyone. Realistic thinking can ground where we are and where we have to go to shape a future that is good for everyone.

I want to talk about hardware improvements, AGI, superintelligence, scaling laws, the AI bubble, and related topics. But before we dive into these specific areas, I need to establish a foundation that is often overlooked in these discussions. Let me start with the most fundamental principle. — Read More

#strategy

State of AI

The past year has marked a turning point in the evolution and real-world use of large language models (LLMs). With the release of the first widely adopted reasoning model, o1, on December 5th, 2024, the field shifted from single-pass pattern generation to multi-step deliberative inference, accelerating deployment, experimentation, and new classes of applications. As this shift unfolded at a rapid pace, our empirical understanding of how these models have actually been used in practice has lagged behind. In this work, we leverage the OpenRouter platform, an AI inference provider that routes requests across a wide variety of LLMs, to analyze over 100 trillion tokens of real-world LLM interactions across tasks, geographies, and time. In our empirical study, we observe substantial adoption of open-weight models, the outsized popularity of creative roleplay (beyond just the productivity tasks many assume dominate) and coding assistance categories, and the rise of agentic inference. Furthermore, our retention analysis identifies foundational cohorts: early users whose engagement persists far longer than that of later cohorts. We term this phenomenon the Cinderella “Glass Slipper” effect. These findings underscore that the way developers and end-users engage with LLMs “in the wild” is complex and multifaceted. We discuss implications for model builders, AI developers, and infrastructure providers, and outline how a data-driven understanding of usage can inform better design and deployment of LLM systems. — Read More

#strategy