Recently, local AI assistants have exploded in popularity. Tools like OpenClaw now let anyone run powerful AI agents on their own hardware—no cloud subscription required. Yet many people still don’t understand what this actually means.
Some say big companies are panicking because everyone’s buying Mac minis to run AI themselves. This isn’t entirely true.
What big companies fear isn’t you buying that machine. It’s not even you canceling ChatGPT. What they really fear is this: the way compute power is consumed is changing from continuous payment to one-time ownership. — Read More
Why I don’t think AI is a bubble
Most of the people I like think AI is a bubble. This is a tricky topic to discuss, because the “bubble” framing couples financial and technical issues. It’s like a sports fan debating “Is this player overrated?” The answer depends on how good you think that player is, and how good you think other people think they are.
I don’t have anything much to add to the financial part of the “AI bubble” conversation. Various equity prices are based on very optimistic estimates about how AI will progress. This post is about the technological question. I’ll leave it to you to judge what sort of forecast any given asset price actually represents.
The main case I want to make is that performance probably won’t plateau — or at least, the common arguments for why it will plateau don’t add up. — Read More
Why I don’t think AGI is imminent
The CEOs of OpenAI and Anthropic have both claimed that human-level AI is just around the corner — and at times, that it’s already here. These claims have generated enormous public attention. There has been some technical scrutiny of these claims, but critiques rarely reach the public discourse. This piece is a sketch of my own thinking about the boundary between transformer-based large language models and human-level cognition. I have an MS degree in Machine Learning from over a decade ago, and I don’t work in the field of AI currently, but I am well-read on the underlying research. If you know more than I do about these topics, please reach out; I would love to develop my thinking on this further. — Read More
The “AI Kills SaaS” Take Is Lazy. Here’s What’s Actually Happening.
HubSpot’s revenue is up 19%.
Xero is up 23%.
Atlassian is up 23%.
Figma is growing at 40%.
Adobe added another 11% to hit $23.8 billion.
And every single one of their stock prices has been absolutely destroyed this year.
Here is HubSpot, currently at $228, down from a 52-week high of $881.
…So what’s going on? The popular take is simple: AI has arrived, SaaS is dead, pack it up. … I wanted to go deeper. — Read More
Something Big Is Happening
Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren’t paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they’d been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn’t have believed if you’d described it to yourself a month earlier.
I think we’re in the “this seems overblown” phase of something much, much bigger than Covid.
I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.
… Most of us who work in AI are building on top of foundations we didn’t lay. We’re watching this unfold the same as you… we just happen to be close enough to feel the ground shake first.
But it’s time now. Not in an “eventually we should talk about this” way. In a “this is happening right now and I need you to understand it” way. — Read More
AI-Generated Text and the Detection Arms Race
In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. As near as the editors could tell, many submitters pasted the magazine’s detailed story guidelines into an AI and sent in the results. And they weren’t alone. Other fiction magazines have also reported a high number of AI-generated submissions.
This is only one example of a ubiquitous trend. A legacy system relied on the difficulty of writing and cognition to limit volume. Generative AI overwhelms the system because the humans on the receiving end can’t keep up. — Read More
Ships Passing in the Night (OpenAI’s GPT-5.3/Anthropic’s Opus 4.6)
OpenAI just introduced a new model that unlocks even more of what Codex can do: GPT‑5.3-Codex, the most capable agentic coding model to date. The model advances both the frontier coding performance of GPT‑5.2-Codex and the reasoning and professional knowledge capabilities of GPT‑5.2, together in one model, which is also 25% faster. This enables it to take on long-running tasks that involve research, tool use, and complex execution. Much like a colleague, you can steer and interact with GPT‑5.3-Codex while it’s working, without losing context.
Meanwhile, Anthropic countered with Claude Opus 4.6, which improves on its predecessor’s coding skills. It plans more carefully, sustains agentic tasks for longer, operates more reliably in larger codebases, and has better code review and debugging skills to catch its own mistakes. And, in a first for its Opus-class models, Opus 4.6 features a 1M-token context window in beta.
… Both companies are advancing beyond simple code completion. We’re now talking about AI agents that can tackle complex, multi-step projects with a new level of independence. They are evolving from assistants into collaborators and, in some cases, independent workers. — Read More
Enterprises Don’t Have an AI Problem. They Have an Architecture Problem
Over the last year, I keep hearing the same statements in meetings, reviews, and architecture forums:
“We’re doing AI.” “We have a chatbot now.” “We’ve deployed an agent.”
When I look a little closer, what most organizations really have is not enterprise AI. They have a tool.
Usually it is a chatbot, or a search assistant, or a workflow automation, or a RAG system. All of these are useful. I have built many of them myself. But none of these, by themselves, represent enterprise AI architecture.
AI is not a feature. AI is not a product.
AI is a new enterprise capability layer. And in large organizations, capability layers must be architected. — Read More
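The distinction the article draws can be sketched in code. Here is a minimal, hypothetical Python illustration (all names invented, not from the article): a point tool calls a model directly, while a capability layer centralizes policy and auditing once, for every application that consumes AI.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AICapabilityLayer:
    """A shared layer every application routes AI calls through, so
    governance, auditing, and model choice are architected once."""
    model: Callable[[str], str]                    # pluggable model backend
    audit_log: list = field(default_factory=list)  # centralized observability
    blocked_terms: tuple = ("ssn", "password")     # centralized policy

    def complete(self, app: str, prompt: str) -> str:
        # Policy check happens here, once, for all consuming apps.
        if any(t in prompt.lower() for t in self.blocked_terms):
            self.audit_log.append((app, "BLOCKED"))
            return "[request blocked by policy]"
        self.audit_log.append((app, "OK"))
        return self.model(prompt)

def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM call.
    return f"response to: {prompt}"

# A point tool, by contrast, calls the model directly:
# no shared policy, no shared audit trail.
def standalone_chatbot(prompt: str) -> str:
    return fake_model(prompt)

layer = AICapabilityLayer(model=fake_model)
print(layer.complete("crm-assistant", "summarize this account"))
print(layer.complete("hr-bot", "what is the CEO's password?"))
print(len(layer.audit_log))  # both calls recorded centrally
```

The point of the sketch is structural, not the toy policy check: a chatbot is one consumer of the layer, not the layer itself, which is why "we have a chatbot" is not the same claim as "we have enterprise AI architecture."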
Taboola & Columbia University Research Shows GenAI Ads Perform Just as Well as Human-Made Content
While GenAI has revolutionised production speed and cost, its impact on actual performance has remained a subject of intense debate. The new study, titled “AI Ads That Work: How AI Creative Stacks Up Against Humans,” analysed hundreds of thousands of live ads running on Realize, Taboola’s performance advertising platform, totalling more than 500 million impressions and 3 million clicks. — Read More
The Duelling Rhetoric at the AI Frontier
At Davos 2026, Anthropic CEO Dario Amodei told a room full of the world’s most influential investors that AI would replace “most, maybe all” of what software engineers do within six to twelve months. A few hours later, Google DeepMind CEO Demis Hassabis took the same stage and said current AI systems are “nowhere near” human-level intelligence, and that we probably need “one or two more breakthroughs” before AGI arrives.
Both men run frontier AI labs. Both have access to roughly the same benchmarks, papers, and internal capabilities data. Yet their public forecasts diverge so dramatically that at least one of them must be either wrong or strategically misleading. The interesting question is which, and why. — Read More