AI agents are starting to eat SaaS

We spent fifteen years watching software eat the world. Entire industries got swallowed by software – retail, media, finance, you name it. There has been incredible disruption over the past couple of decades, with a proliferation of SaaS tooling. This has led to a huge swath of SaaS companies – valued, collectively, in the trillions.

In my last post, which debated whether the cost of software has dropped 90% with AI coding agents, I looked mainly at the supply side of the market. What will happen to demand for SaaS tooling if this hypothesis plays out? I’ve been thinking a lot about these second- and third-order effects of the changes in software engineering.

The calculus on build vs buy is starting to change. Software ate the world. Agents are going to eat SaaS. — Read More

#strategy

Economics of Orbital vs Terrestrial Data Centers

Before we get nerd sniped by the shiny engineering details, ask the only question that matters. Why compute in orbit? Why should a watt or a flop 250 miles up be more valuable than one on the surface? What advantage justifies moving something as mundane as matrix multiplication into LEO?

That “why” is almost missing from the public conversation. People jump straight to hardware and hand-wave the business case, as if the economics are self-evident. They aren’t. A lot of the energy here is FOMO and aesthetic futurism, not a grounded value proposition.

… This is all to say that the current discourse is increasingly bothering me due to the lack of rigor. — Read More

#strategy

Is It a Bubble?

Ours is a remarkable moment in world history. A transformative technology is ascending, and its supporters claim it will forever change the world. To build it requires companies to invest a sum of money unlike anything in living memory. News reports are filled with widespread fears that America’s biggest corporations are propping up a bubble that will soon pop.

… One of the most interesting aspects of bubbles is their regularity, not in terms of timing, but rather the progression they follow. Something new and seemingly revolutionary appears and worms its way into people’s minds. It captures their imagination, and the excitement is overwhelming. The early participants enjoy huge gains. Those who merely look on feel incredible envy and regret and – motivated by the fear of continuing to miss out – pile in. They do this without knowledge of what the future will bring or concern about whether the price they’re paying can possibly be expected to produce a reasonable return with a tolerable amount of risk. The end result for investors is inevitably painful in the short to medium term, although it’s possible to end up ahead after enough years have passed.

… I took the quote that opens this memo from Derek Thompson’s November 4 newsletter entitled “AI Could Be the Railroad of the 21st Century. Brace Yourself,” about parallels between what’s going on today in AI and the railroad boom of the 1860s. Its word-for-word applicability to both shows clearly what’s meant by the phrase widely attributed to Mark Twain: “history rhymes.” — Read More

#strategy

Why AGI Will Not Happen

If you are reading this, you probably have strong opinions about AGI, superintelligence, and the future of AI. Maybe you believe we are on the cusp of a transformative breakthrough. Maybe you are skeptical. This blog post is for those who want to think more carefully about these claims and examine them from a perspective that is often missing in the current discourse: the physical reality of computation.

I have been thinking about this topic for a while now, and what prompted me to finally write this down was a combination of things: a Twitter thread, conversations with friends, and a growing awareness that the thinking around AGI and superintelligence is not just optimistic, but fundamentally flawed. The purpose of this blog post is to address what I see as very sloppy thinking – thinking created in an echo chamber, particularly in the Bay Area, where the same ideas amplify themselves without critical awareness. This amplification of bad ideas exuded by the rationalist and EA movements is a big problem in shaping a beneficial future for everyone. Realistic thought can ground where we are and where we have to go to shape a future that is good for everyone.

I want to talk about hardware improvements, AGI, superintelligence, scaling laws, the AI bubble, and related topics. But before we dive into these specific areas, I need to establish a foundation that is often overlooked in these discussions. Let me start with the most fundamental principle. — Read More

#strategy

State of AI

The past year has marked a turning point in the evolution and real-world use of large language models (LLMs). With the release of the first widely adopted reasoning model, o1, on December 5th, 2024, the field shifted from single-pass pattern generation to multi-step deliberative inference, accelerating deployment, experimentation, and new classes of applications. As this shift unfolded at a rapid pace, our empirical understanding of how these models have actually been used in practice has lagged behind. In this work, we leverage the OpenRouter platform, an inference provider that routes requests to a wide variety of LLMs, to analyze over 100 trillion tokens of real-world LLM interactions across tasks, geographies, and time. In our empirical study, we observe substantial adoption of open-weight models, the outsized popularity of the creative-roleplay and coding-assistance categories (beyond the productivity tasks many assume dominate), and the rise of agentic inference. Furthermore, our retention analysis identifies foundational cohorts: early users whose engagement persists far longer than that of later cohorts. We term this phenomenon the Cinderella “Glass Slipper” effect. These findings underscore that the way developers and end users engage with LLMs “in the wild” is complex and multifaceted. We discuss implications for model builders, AI developers, and infrastructure providers, and outline how a data-driven understanding of usage can inform better design and deployment of LLM systems. — Read More

#strategy

Move over, computer science. Students are flocking to new AI majors

Artificial intelligence is the hot new college major.

This semester, more than 3,000 students enrolled in a new college of artificial intelligence and cybersecurity at the University of South Florida in Tampa.

At the University of California, San Diego, 150 first-year students signed up for a new AI major. And the State University of New York at Buffalo created a stand-alone “department of AI and society,” which is offering new interdisciplinary degrees in fields like “AI and policy analysis.”

The rapid popularisation of products such as ChatGPT, along with skyrocketing valuations of tech giants such as chipmaker Nvidia, is helping to drive the campus AI boom. — Read More

#strategy

Technical Deflation

In economics, deflation is the opposite of inflation—it’s what we call it when prices go down instead of up. It is generally considered harmful: both because it is usually brought on by something really bad (like a severe economic contraction), and because in and of itself, it has knock-on effects on consumer behavior that can lead to a death spiral. One of the main problems is that if people expect prices to keep going down, they’ll delay purchases and save more, because they expect that they’ll be able to get the stuff for less later. Less spending means less demand means less revenue means fewer jobs which means less spending and then whoops you’re in a deflationary spiral.

… This isn’t really an economics blog post, though. I’m thinking about deflation because it parallels a recent pattern I’m seeing in startups. (So I guess you could call it a micro-economics blog post?) The basic mechanism is: (1) it’s easier and cheaper to build software now than ever before; (2) it seems like it will probably keep getting easier and cheaper for the foreseeable future; so (3) why bother building anything now, when you can just build it later, cheaper and easier. — Read More
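The deferral calculus in point (3) can be made concrete with a toy model. The numbers below are illustrative assumptions of mine, not from the post: build cost starts at 100 (arbitrary units) and halves each year, owning the tool yields a fixed annual value, and we evaluate over a five-year horizon with no discounting. The point it illustrates is that rapidly falling build costs only justify waiting when the tool's ongoing value is low relative to its cost.

```python
# Toy model of the "technical deflation" deferral calculus.
# Assumptions (mine, not the post's): cost0 = 100 units, cost halves
# each year (decay = 0.5), fixed annual value from owning the tool,
# 5-year horizon, no discounting.

def best_build_year(cost0=100.0, decay=0.5, annual_value=60.0, horizon=5):
    """Return (best_year, nets): the build-start year with the highest
    net benefit, and the net benefit for every candidate year.

    Building in year t costs cost0 * decay**t and yields annual_value
    for each remaining year of the horizon.
    """
    nets = {t: annual_value * (horizon - t) - cost0 * decay**t
            for t in range(horizon)}
    best = max(nets, key=nets.get)
    return best, nets

# High-value tool: the value forgone while waiting swamps the cost savings.
best, _ = best_build_year(annual_value=60.0)
print(best)  # 0 -> build immediately

# Low-value tool: waiting for costs to fall beats building now.
best, _ = best_build_year(annual_value=15.0)
print(best)  # 2 -> defer two years
```

Under these assumptions, deferral only wins for low-value builds, which is one way to read the post's worry: expected cost declines rationally push marginal projects into "later", and that deferred demand is exactly the deflationary dynamic.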

#strategy

Implications of AI to Schools

… You will never be able to detect the use of AI in homework. Full stop. All “detectors” of AI imo don’t really work, can be defeated in various ways, and are in principle doomed to fail. You have to assume that any work done outside the classroom has used AI.

…[T]he goal is that the students are proficient in the use of AI, but can also exist without it, and imo the only way to get there is to flip classes around and move the majority of testing to in class settings. — Read More

#strategy

The Iceberg Index: Measuring Workforce Exposure Across the AI Economy

Artificial Intelligence is reshaping America’s $9.4 trillion labor market, with cascading effects that extend far beyond visible technology sectors. When AI transforms quality control tasks in automotive plants, consequences spread through logistics networks, supply chains, and local service economies. Yet traditional workforce metrics cannot capture these ripple effects: they measure employment outcomes after disruption occurs, not where AI capabilities overlap with human skills before adoption crystallizes. Project Iceberg addresses this gap using Large Population Models to simulate the human-AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills and interacting with thousands of AI tools. It introduces the Iceberg Index, a skills-centered metric that measures the wage value of skills AI systems can perform within each occupation. The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines. Analysis shows that visible AI adoption concentrated in computing and technology (2.2% of wage value, approx $211 billion) represents only the tip of the iceberg. Technical capability extends far below the surface through cognitive automation spanning administrative, financial, and professional services (11.7%, approx $1.2 trillion). This exposure is fivefold larger and geographically distributed across all states rather than confined to coastal hubs. Traditional indicators such as GDP, income, and unemployment explain less than 5% of this skills-based variation, underscoring why new indices are needed to capture exposure in the AI economy. By simulating how these capabilities may spread under different scenarios, Iceberg enables policymakers and business leaders to identify exposure hotspots, prioritize investments, and test interventions before committing billions to implementation. — Read More

#strategy

Ilya Sutskever: AI’s bottleneck is ideas, not compute

Ilya Sutskever, in a rare interview with Dwarkesh Patel, laid out his sharp critique of the AI industry. He argues that reliance on brute-force “scaling” has hit a wall. While AI models may be brilliant on tests, they are fragile in real-world applications. He believes the pursuit of general intelligence must now shift from simply gathering more data to discovering new, more efficient scientific principles. — Read More

#strategy