The Piss Average Problem

The Age of AI is a Crisis of Faith

The fundamental question facing online spaces in 2025 is no longer "can AI pass as human?" but rather "can humans prove they're not AI?"

This represents a profound shift from technical doubt to existential uncertainty. It’s a crisis of faith where the bedrock assumption that we interact with other humans online has collapsed. And I’m not being hyperbolic. In 2024, bot traffic exceeded human traffic for the first time in a decade, hitting 51%. We’ve crossed the threshold. The internet is now majority non-human.

When I personally venture onto the Internet, particularly places like LinkedIn, Substack, or any social media's comment section, Dead Internet Theory truly shines as a valid hypothesis. This once-fringe conspiracy theory, which speculates that the Internet is now mostly bots talking to bots, is now many people's lived experience — Read More

#strategy

Android Dreams

“The danger is never that robots disobey, but that they obey perfectly.”

At the convergence of frontier research breakthroughs, billions in capital, and rising geopolitical tensions lies a dream for a new physical world. After the LLM wave, robotics is seen as the next exponential growth domain. Chinese manufacturing is viewed as an existential threat to the US, adding to incentives. And, though robotics is the hardest domain of AI, multiple new AI strategies now offer clear paths to Embodied General Intelligence (EGI).

Informed by conversations with frontier researchers, intuitions gained at Optimus and Dyna, and my own syntheses, I predict inference-controlled robots will comprise half the world's GDP by 2045. This scenario illustrates how. — Read More

#strategy

#robotics

Common Ground between AI 2027 & AI as Normal Technology

AI 2027 and AI as Normal Technology were both published in April of this year. Both were read much more widely than we, their authors, expected.

Some of us (Eli, Thomas, Daniel, the authors of AI 2027) expect AI to radically transform the world within the next decade, up to and including such sci-fi-sounding possibilities as superintelligence, nanofactories, and Dyson swarms. Progress will be continuous, but it will accelerate rapidly around the time that AIs automate AI research.

Others (Sayash and Arvind, the authors of AI as Normal Technology) think that the effects of AI will be much more, well, normal. Yes, we can expect economic growth, but it will be the gradual, year-on-year improvement that accompanied technological innovations like electricity or the internet, not a radical break in the arc of human history.

These are substantial disagreements, which have been partially hashed out here and here.

Nevertheless, we’ve found that all of us have more in common than you might expect. — Read More

#strategy

From Words to Worlds: Spatial Intelligence is AI’s Next Frontier

In 1950, when computing was little more than automated arithmetic and simple logic, Alan Turing asked a question that still reverberates today: can machines think? It took remarkable imagination to see what he saw: that intelligence might someday be built rather than born. That insight later launched a relentless scientific quest called Artificial Intelligence (AI). Twenty-five years into my own career in AI, I still find myself inspired by Turing’s vision. But how close are we? The answer isn’t simple.

Today, leading AI technologies such as large language models (LLMs) have begun to transform how we access and work with abstract knowledge. Yet they remain wordsmiths in the dark; eloquent but inexperienced, knowledgeable but ungrounded. Spatial intelligence will transform how we create and interact with real and virtual worlds—revolutionizing storytelling, creativity, robotics, scientific discovery, and beyond. This is AI's next frontier. — Read More

#strategy

The Great Decoupling of Labor and Capital

Almost two decades ago, in 2007, Hewlett-Packard (HP) became the first tech company to exceed the $100 billion annual revenue threshold. At that time, HP had 172k employees. The very next year, IBM joined the club, but IBM had almost 400k employees.

Today's megacap tech companies all exhibit a common characteristic: their growth is largely decoupled from their headcount. Intuitively, this might not be news to anyone, but when I sat down and carefully jotted down the numbers, the extent of the decoupling even before Generative AI truly came onto the scene was a bit astonishing to me. — Read More
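The comparison above can be made concrete with a back-of-envelope revenue-per-employee calculation. This is a minimal sketch using only the figures cited in the blurb (revenue approximated at the $100 billion threshold both crossed; headcounts as stated), not exact financials:

```python
# Back-of-envelope revenue per employee at the moment each company
# crossed the ~$100B annual revenue threshold. Revenue is approximated
# at the threshold itself for illustration.

REVENUE = 100e9  # ~$100 billion

companies = {
    "HP (2007)": 172_000,   # headcount when HP crossed $100B
    "IBM (2008)": 400_000,  # headcount when IBM crossed $100B
}

for name, headcount in companies.items():
    per_employee = REVENUE / headcount
    print(f"{name}: ~${per_employee:,.0f} revenue per employee")
```

Even with this rough approximation, HP was generating more than twice the revenue per employee that IBM was a year later (~$581k versus ~$250k), which is the kind of gap the full article quantifies across today's megacaps.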

#strategy

Remote Labor Index: Measuring AI Automation of Remote Work

AIs have made rapid progress on research-oriented benchmarks of knowledge and reasoning, but it remains unclear how these gains translate into economic value and automation. To measure this, we introduce the Remote Labor Index (RLI), a broadly multi-sector benchmark comprising real-world, economically valuable projects designed to evaluate end-to-end agent performance in practical settings. AI agents perform near the floor on RLI, with the highest-performing agent achieving an automation rate of 2.5%. These results help ground discussions of AI automation in empirical evidence, setting a common basis for tracking AI impacts and enabling stakeholders to proactively navigate AI-driven labor automation. — Read More

#strategy

The End of Cloud Inference

Most people picture the future of AI the same way they picture the internet: somewhere far away, inside giant buildings full of humming machines. Your phone or laptop sends a request, a distant data center does the thinking, and the answer streams back. That story has been useful for getting AI off the ground, but it’s not how it ends. For a lot of everyday tasks, the smartest place to run AI will be where the data already lives: on your device.

We already accept this in another computationally demanding field: graphics. No one renders every frame of a video game in a warehouse and streams the pixels to your screen. Your device does the heavy lifting locally because it’s faster, cheaper, and more responsive. AI is heading the same way. The cloud won’t disappear, but it will increasingly act like a helpful backup or “bigger battery,” not the default engine for everything. — Read More

#strategy

AI Broke Interviews

Interviewing has always been a big can of worms in the software industry. For years, big tech has gone with LeetCode-style questions mixed with a few behavioural and system design rounds. Before that, it was brainteasers. I still remember the How Would You Move Mount Fuji? era. Tech has never really had good interviewing, but the question remains: how do you actually evaluate someone's ability to reason about data structures and algorithms without asking algorithmic questions? I don't know. People say engineers don't need DSA. Perhaps. Nonetheless, it's still taught in CS programs for a reason.

In real work, I’ve used data structure-y thinking maybe a handful of times, but I had more time to think. Literally days. Companies, on the other hand, need to make fast decisions. It’s not perfect. It was never perfect. But it worked well enough. Well, at least, I thought it did.

And then AI detonated the whole thing. Sure, people could technically cheat before. You could have friends feeding you hints or solving the problem with you, but even that was limited. You needed friends who could code. And if your friend was good enough to help you, chances are you weren't completely incompetent yourself. Cheating still filtered for a certain baseline. AI destroyed that filtration layer.

Everyone now has access to perfect code, perfect explanations, perfect system design diagrams, and even perfect behavioural answers. — Read More

#strategy

Wharton AI Study: Gen AI Fast-Tracks into the Enterprise

Three years ago, in the wake of ChatGPT's debut, we launched our initial study to push past the headlines — asking business leaders how they were actually using Gen AI and soliciting their expectations around the technology's future applications in their businesses.

As Gen AI fast-tracks into budgets, processes, and training, executives need benchmarks, not anecdotes. Our unique, year-over-year, repeated cross-sectional lens now shows where the common use cases are, where returns are emerging, and which people-and-process levers could convert mainstream use into durable ROI. We will track these shifts each year in this ongoing research initiative. — Read More

#strategy

The Faustian bargain of AI

Christopher Marlowe, the author of Doctor Faustus, never lived to see his play published or performed. He was murdered after a dispute over a bill spiraled out of control, ending in a fatal stab to the eye. He would never see how Doctor Faustus would live on for centuries. In a modern sense, we too may never fully grasp the long-term consequences of today’s technologies—AI included, for better or worse.

This social contract we are signing between artificial intelligence and the human race is changing life rapidly. And while we can guess where it takes us, we aren’t entirely sure. Instead, we can look to the past to find truth.

Take the COVID-19 pandemic, for example. As rumors of shutdowns began to swirl, I found myself reaching for Albert Camus’ The Plague. Long before we knew what was coming, Camus had already offered a striking portrait of the very world we were about to enter: lockdowns, political division, and uncertain governments.

Just as Camus captured the psychological toll of unseen forces, Marlowe explores the cost of human ambition when we seek control over the uncontrollable. Why did Faustus give up his soul? Why did he cling to his pact, even as doubt crept in? And what did he gain, briefly, in exchange for something eternal?

In Marlowe’s tragedy, we find a reflection of our own choices as we navigate the promises and perils of AI. — Read More

#strategy