Why I don’t think AI is a bubble

Most of the people I like think AI is a bubble. This is a tricky topic to discuss, because the “bubble” framing couples financial and technical issues. It’s like a sports fan debating “Is this player overrated?“. The answer depends on how good you think that player is, and how good you think other people think they are.

I don’t have anything much to add to the financial part of the “AI bubble” conversation. Various equity prices are based on very optimistic estimates about how AI will progress. This post is about the technological question. I’ll leave it to you to judge what sort of forecast any given asset price actually represents.

The main case I want to make is that performance probably won’t plateau — or at least, the common arguments for why it will plateau don’t add up.  — Read More

#strategy

BCIs in 2026: Still Janky, Still Dangerous, Still Overhyped

Alright, another year, another batch of venture capital pouring into ‘mind-reading’ startups that promise to turn your thoughts into Twitter threads. Frankly, it’s exhausting. We’re in 2026, and the fundamental problems that plagued Brain-Computer Interfaces (BCIs) a decade ago are still here, just wearing slightly shinier packaging. If you think we’re anywhere near seamless neural integration that lets you control a prosthetic arm with the fluidity of a natural limb, or hell, even reliably type at 60 WPM purely by thinking, you’ve been mainlining too much techbro hype. Let’s pull back the curtain on this circus, shall we? Because from an engineering perspective, most of what you hear is, generously, aspirational fiction. — Read More

#human

The Promptware Kill Chain

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape. — Read More

#cyber

Why I don’t think AGI is imminent

The CEOs of OpenAI and Anthropic have both claimed that human-level AI is just around the corner — and at times, that it’s already here. These claims have generated enormous public attention. There has been some technical scrutiny of these claims, but critiques rarely reach the public discourse. This piece is a sketch of my own thinking about the boundary of transformer-based large language models and human-level cognition. I have an MS degree in Machine Learning from over a decade ago, and I don’t work in the field of AI currently, but I am well-read on the underlying research. If you know more than I do about these topics, please reach out and let me know, I would love to develop my thinking on this further. — Read More

#strategy