In 1950, when computing was little more than automated arithmetic and simple logic, Alan Turing asked a question that still reverberates today: can machines think? It took remarkable imagination to see what he saw: that intelligence might someday be built rather than born. That insight later launched a relentless scientific quest called Artificial Intelligence (AI). Twenty-five years into my own career in AI, I still find myself inspired by Turing’s vision. But how close are we? The answer isn’t simple.
Today, leading AI technologies such as large language models (LLMs) have begun to transform how we access and work with abstract knowledge. Yet they remain wordsmiths in the dark: eloquent but inexperienced, knowledgeable but ungrounded. Spatial intelligence will transform how we create and interact with real and virtual worlds, revolutionizing storytelling, creativity, robotics, scientific discovery, and beyond. This is AI’s next frontier. — Read More
The Great Decoupling of Labor and Capital
Almost two decades ago, in 2007, Hewlett-Packard (HP) became the first tech company to exceed the $100 billion annual revenue threshold. At the time, HP had 172k employees. The very next year, IBM joined the club, but with almost 400k employees.
Today’s megacap tech companies all exhibit a common characteristic: their growth is largely decoupled from their headcount. Intuitively, this may not be news to anyone, but when I sat down and carefully jotted down the numbers, the extent of the decoupling even before generative AI truly came onto the scene was a bit astonishing to me. — Read More
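For a rough sense of scale, the figures quoted above already imply very different revenue-per-employee ratios. A back-of-the-envelope sketch (using only the numbers cited in this paragraph, which are approximate):

```python
# Revenue per employee, computed from the figures quoted in the text.
# These are rough, illustrative numbers, not audited financials.
companies = {
    "HP (2007)":  (100e9, 172_000),   # first to cross $100B annual revenue
    "IBM (2008)": (100e9, 400_000),   # "almost 400k employees"
}

per_head = {name: rev / emp for name, (rev, emp) in companies.items()}
for name, value in per_head.items():
    print(f"{name}: ${value:,.0f} per employee")
```

Even at the same revenue level, HP was generating well over twice as much revenue per employee as IBM, and today’s megacaps have pushed that ratio far higher still.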
Remote Labor Index: Measuring AI Automation of Remote Work
AIs have made rapid progress on research-oriented benchmarks of knowledge and reasoning, but it remains unclear how these gains translate into economic value and automation. To measure this, we introduce the Remote Labor Index (RLI), a broadly multi-sector benchmark comprising real-world, economically valuable projects designed to evaluate end-to-end agent performance in practical settings. AI agents perform near the floor on RLI, with the highest-performing agent achieving an automation rate of 2.5%. These results help ground discussions of AI automation in empirical evidence, setting a common basis for tracking AI impacts and enabling stakeholders to proactively navigate AI-driven labor automation. — Read More
The End of Cloud Inference
Most people picture the future of AI the same way they picture the internet: somewhere far away, inside giant buildings full of humming machines. Your phone or laptop sends a request, a distant data center does the thinking, and the answer streams back. That story has been useful for getting AI off the ground, but it’s not how it ends. For a lot of everyday tasks, the smartest place to run AI will be where the data already lives: on your device.
We already accept this in another computationally demanding field: graphics. No one renders every frame of a video game in a warehouse and streams the pixels to your screen. Your device does the heavy lifting locally because it’s faster, cheaper, and more responsive. AI is heading the same way. The cloud won’t disappear, but it will increasingly act like a helpful backup or “bigger battery,” not the default engine for everything. — Read More
AI Broke Interviews
Interviewing has always been a big can of worms in the software industry. For years, big tech has gone with LeetCode-style questions mixed with a few behavioural and system design rounds. Before that, it was brainteasers; I still remember the “How Would You Move Mount Fuji?” era. Tech has never really had good interviewing, but the question remains: how do you actually evaluate someone’s ability to reason about data structures and algorithms without asking algorithmic questions? I don’t know. People say engineers don’t need DSA. Perhaps. Nonetheless, it’s still taught in CS programs for a reason.
In real work, I’ve used data structure-y thinking maybe a handful of times, but I had more time to think. Literally days. Companies, on the other hand, need to make fast decisions. It’s not perfect. It was never perfect. But it worked well enough. Well, at least, I thought it did.
And then AI detonated the whole thing. Sure, people could technically cheat before. You could have friends feeding you hints or solving the problem with you but even that was limited. You needed friends who could code. And if your friend was good enough to help you, chances are you weren’t completely incompetent yourself. Cheating still filtered for a certain baseline. AI destroyed that filtration layer.
Everyone now has access to perfect code, perfect explanations, perfect system design diagrams, and even perfect behavioural answers. — Read More
Wharton AI Study: Gen AI Fast-Tracks into the Enterprise
Three years ago, in the wake of ChatGPT’s debut, we launched our initial study to push past the headlines, asking business leaders how they were actually using Gen AI and soliciting their expectations around the technology’s future applications in their businesses.
As Gen AI fast-tracks into budgets, processes, and training, executives need benchmarks, not anecdotes. Our unique, year-over-year, repeated cross-sectional lens now shows where the common use cases are, where returns are emerging, and which people-and-process levers could convert mainstream use into durable ROI. We will track these shifts each year in this ongoing research initiative. — Read More
The Faustian bargain of AI
Christopher Marlowe, the author of Doctor Faustus, never lived to see his play published or performed. He was murdered after a dispute over a bill spiraled out of control, ending in a fatal stab to the eye. He would never see how Doctor Faustus would live on for centuries. In a modern sense, we too may never fully grasp the long-term consequences of today’s technologies—AI included, for better or worse.
This social contract we are signing between artificial intelligence and the human race is changing life rapidly. And while we can guess where it takes us, we aren’t entirely sure. Instead, we can look to the past to find truth.
Take the COVID-19 pandemic, for example. As rumors of shutdowns began to swirl, I found myself reaching for Albert Camus’ The Plague. Long before we knew what was coming, Camus had already offered a striking portrait of the very world we were about to enter: lockdowns, political division, and uncertain governments.
Just as Camus captured the psychological toll of unseen forces, Marlowe explores the cost of human ambition when we seek control over the uncontrollable. Why did Faustus give up his soul? Why did he cling to his pact, even as doubt crept in? And what did he gain, briefly, in exchange for something eternal?
In Marlowe’s tragedy, we find a reflection of our own choices as we navigate the promises and perils of AI. — Read More
The AGI race is an all‑pay auction. That’s why “over‑investment” is rational.
When the prize is “winner‑takes‑all” and everyone must pay their costs whether they win or lose, you don’t get measured competition—you get value (rent) dissipation [3]. That is what contest theory calls an all‑pay auction [0]. In expectation, participants spend roughly the entire value of the prize in aggregate trying to win it [1][2]. What happens when the perceived value of the prize is nearly infinite?
For AGI—where the imagined prize is monopoly‑like profits across software, science, society, the next industrial revolution, the whole fabric of human civilization—equilibrium spending is enormous by construction. In this worldview, the seemingly excessive capital allocation is rational: if you cut spending while rivals do not, you lose the race and everything you’ve already invested. Google co‑founder Larry Page has allegedly asserted (as relayed by investor Gavin Baker): “I am willing to go bankrupt rather than lose this race” [4]. — Read More
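The “full dissipation” claim has a textbook form: in a two-bidder, complete-information all-pay auction for a prize worth V, the unique equilibrium has each bidder mixing uniformly on [0, V], so aggregate expected spending equals V and each bidder’s expected profit is zero. A quick Monte Carlo sketch of that stylized model (a simplified illustration of the contest-theory result, not the article’s own analysis):

```python
import random

def simulate_all_pay(V=100.0, trials=200_000, seed=1):
    """Two-bidder all-pay auction with common prize V.
    Equilibrium: each bidder draws a bid uniformly from [0, V];
    bids are sunk whether you win or lose."""
    rng = random.Random(seed)
    spend = payoff_a = 0.0
    for _ in range(trials):
        a = rng.uniform(0, V)          # bidder A's equilibrium draw
        b = rng.uniform(0, V)          # bidder B's equilibrium draw
        spend += a + b                 # all-pay: both bids are paid
        payoff_a += (V if a > b else 0.0) - a
    return spend / trials, payoff_a / trials

avg_spend, avg_payoff = simulate_all_pay()
print(f"aggregate spend per round ~ {avg_spend:.1f} (prize V = 100)")
print(f"bidder A's expected profit ~ {avg_payoff:.2f} (rents dissipated)")
```

The simulation confirms the intuition in the article: total outlays converge to the full value of the prize, which is exactly why spending explodes when the perceived prize is “nearly infinite.”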
New physical attacks are quickly diluting secure enclave defenses from Nvidia, AMD, and Intel
Trusted execution environments, or TEEs, are everywhere—in blockchain architectures, virtually every cloud service, and computing involving AI, finance, and defense contractors. It’s hard to overstate the reliance that entire industries have on three TEEs in particular: Confidential Compute from Nvidia, SEV-SNP from AMD, and SGX and TDX from Intel. All three come with assurances that confidential data and sensitive computing can’t be viewed or altered, even if a server has suffered a complete compromise of the operating kernel.
A trio of novel physical attacks raises new questions about the true security offered by these TEEs, and about the exaggerated promises and misconceptions coming from the big and small players using them.
The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and SGX/TDX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month—which worked only against CPUs using DDR4 memory—TEE.fail works against DDR5, allowing it to target the latest TEEs. — Read More
Through the Looking Glass: Stephen Klein’s Quest to Make AI Think Before It Speaks
“Agentic AI is 100% Non-Sense Designed To Scare You Into Spending Money on Consulting.”
That was the hook of a LinkedIn post designed to ruffle feathers in the AI world. It was bold, direct, and very Stephen Klein.
… While most of Silicon Valley is busy building AI companies designed to automate and replace jobs, all in the pursuit of profit, Stephen is purposely, loudly, going against the grain.
He is the founder of Curiouser.ai, a startup building the world’s first strategic AI coach, Alice, designed not to answer your questions, but to ask them.
And not just any questions: thought-provoking, Socratic, destabilizing ones.
Welcome to Alice in Wonderland. And Stephen, like a modern-day Lewis Carroll, is inviting us to question everything. — Read More