Phase one: most of 2023. You had to be technical. The models were there but they hallucinated constantly. You needed to be deeply technical to get anything useful out of a raw LLM API. Most of us — myself included — weren’t equipped. I remember being at SaaStr Annual 2023, talking with David Sacks, asking how he was thinking about AI at Craft. He said they wanted 80% of investments to be AI. I asked to see the great ones already in market. His answer: they’re all proofs of concept. We’re all in anyway. That was the right call if you were investing at the LLM layer. I wasn’t smart enough to play there, let alone deploy AI B2B agents then.
Phase two: 2024 into early 2025: the weird prompt engineer era. You could torture these tools into doing something useful, but you had to craft these elaborate, convoluted prompts that made no sense to anyone else. “Prompt engineer” became the hottest job on the planet for about a year. That job is now dead.
Phase three — which is right now — is the era where ordinarily smart generalists can make AI agents and AI tools do genuinely magical and useful things. No contorted prompts. No engineering degree. Just software deployment skills you probably already have. Some of it is the profound leap forward of Opus 4.5+. Some of it is that the agentic products themselves have just gotten better. It’s both. It’s now. — Read More
Japan’s Team Mirai Uses Tech to Bolster Democracy, Not Undermine It
Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrate the viability of a different way to do politics.
In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.
Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an AI Interviewer walks them through the subject, answering their questions, interrogating their experience, even challenging their thinking. — Read More
Vibe physics: The AI grad student
There has been a lot of recent hype about AI scientists doing end-to-end research autonomously. In August 2024, Sakana AI released their AI Scientist, a system designed to automate the entire research lifecycle—from generating hypotheses to writing papers. In February 2025, Google released an AI co-scientist built on Gemini, promising to help researchers generate and evaluate hypotheses at scale. And in August 2025, the Allen Institute for AI (Ai2) launched the open-source Asta ecosystem, featuring tools like CodeScientist and AutoDiscovery to find patterns in complex datasets. Since then, a new entrant has appeared every few months—FutureHouse’s Kosmos, the Autoscience Institute’s Carl, the Simons Foundation’s Denario project, among others—each promising some version of end-to-end autonomous research. Visionary as these approaches are, their successes to date seem a bit forced: run hundreds or thousands of trials and define the best one as interesting. While I believe we are not far from end-to-end science, I’m not convinced we can skip the intermediate steps. Maybe LLMs need to go to graduate school before advancing straight to the Ph.D.
… What about theoretical physics? End-to-end AI scientists have found their footing in data-rich domains, but theoretical physics is not one of them. Unlike mathematics, theoretical physics problems can be more nebulous—less about formal proof search and more about physical intuition, choosing the right approximations, and navigating a landscape of subtleties that often trip up even experienced researchers. Even so, there are problems in physics where AI might be better suited. Not yet the paradigm-shifting questions at the frontier, but those where the conceptual framework is established and the goal well-defined. To find out if AI can solve these types of theory problems, I supervised Claude through a real research calculation at the level of a second-year grad student. — Read More
6 innovation curves are rewriting enterprise IT strategy
Enterprise transformation doesn’t happen overnight, nor does it typically happen all at once. Yet sometimes business leaders must confront the reality of simultaneous technology shifts. Each shift follows its own roadmap and requires attention so that changes aren’t too disruptive. To keep things on course, businesses must manage parallel changes as they evolve.
Today’s business landscape is unique in that digital innovation is advancing rapidly, and sudden advances in artificial intelligence (AI) are shifting management philosophies in real time. For IT leaders who generally adjust to transformations in sequence – optimize one area, then move to the next – the challenge becomes adjusting rapidly to monumental technology shifts. The organizations that will thrive are the ones that intentionally adapt, building operating models, architectures and governance designs that can flex as multiple shifts unfold at once. — Read More
Designing AI for Disruptive Science
In On Exactitude in Science, the writer Jorge Luis Borges imagines an empire so devoted to cartography that its mapmakers draw a map as large and detailed as the empire itself. “In the Deserts of the West, still today, there are Tattered Ruins of that Map,” Borges writes, “inhabited by Animals and Beggars.” Borges’s map is a parable for knowledge, and one of its lessons is that too much detail can quickly become impractical — a map at that scale would be perfect but useless.
But with today’s AI systems, one might wonder if such a map is so absurd after all. Computers and the Internet have already helped us to digitize much of human knowledge, and AI enables us to scan it quickly and easily. For instance, large language models are trained on trillions of words spanning much of recorded human knowledge. In biology, systems like AlphaFold learn from large databases to predict a protein’s folded structure from its amino acid sequence. — Read More
The Future of SaaS Is Agentic
The future of SaaS is agentic, but agentic SaaS is not just a chatbot layered on top of APIs and a dashboard. Traditional SaaS was built for users to operate software manually; agentic SaaS shifts that burden to software that can act on behalf of users. That changes both the interface and the architecture: the UI remains, but becomes a layer for intent, supervision, and review, while the product itself evolves into a system of stateful processes that can plan, execute, and adapt over time. The winners will not be the products with the most AI features, but the ones that remove the most friction and make software feel less like a tool to operate and more like a system that works for you. — Read More
OpenAI is throwing everything into building a fully automated researcher
OpenAI is refocusing its research efforts and throwing its resources into a new grand challenge. The San Francisco firm has set its sights on building what it calls an AI researcher, a fully automated agent-based system that will be able to go off and tackle large, complex problems by itself. OpenAI says that this new research goal will be its “North Star” for the next few years, pulling together multiple research strands, including work on reasoning models, agents, and interpretability.
There’s even a timeline. OpenAI plans to build “an autonomous AI research intern”—a system that can take on a small number of specific research problems by itself—by September. The AI intern will be the precursor to a fully automated multi-agent research system that the company plans to debut in 2028. — Read More
World Models: Computing the Uncomputable
… I am on the record as being skeptical that LLMs will take us to superintelligence, but I think there is a real shot that World Models will drive superhuman, complementary machines that do things that we can’t, or don’t want to, do.
… The world is a place where unexpected futures unfold, but in somewhat predictable ways. As humans, we can envision almost all of them with roughly the same amount of effort, giving each possibility a similar amount of thought. Computers can’t.
It’s no wonder traditional computing struggles with this complexity. Imagine anticipating and coding each and every action, as well as the interactions between all of those actions. Mathematically, in a traditional engine, simulating N fans is at least an O(N) problem, and with pairwise interactions it grows to O(N²). Each person, flag, chair, and ball must be explicitly calculated, and so must the interactions between them.
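A toy sketch makes the scaling argument concrete (the entity fields and function names here are illustrative, not from any real engine): updating each entity is linear in N, but computing every pairwise interaction explicitly is quadratic, so doubling the crowd roughly quadruples the work.

```python
import itertools

def step(entities):
    """One tick of a naive traditional engine: each entity is updated
    individually (O(N)), then every pair's interaction is computed
    explicitly (O(N^2))."""
    for e in entities:                                # O(N) updates
        e["x"] += e["vx"]
    for a, b in itertools.combinations(entities, 2):  # O(N^2) pairs
        if abs(a["x"] - b["x"]) < 1.0:                # toy "collision"
            a["vx"], b["vx"] = b["vx"], a["vx"]       # exchange momentum

def pair_count(n):
    """Pairwise interactions computed per tick for n entities."""
    return n * (n - 1) // 2

print(pair_count(100))   # 4950 interactions per tick
print(pair_count(200))   # 19900 — 2x the entities, ~4x the work
```

A learned world model sidesteps this bookkeeping by predicting the scene's evolution directly, rather than enumerating every interaction.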
In robotics, machines must respond to situations in the real world in the same amount of time, regardless of their complexity, even though, in traditional computing, different situations can take wildly different amounts of time to simulate. This has been a major bottleneck for robotics and embodied AI progress.
World Models are a solution to that problem. — Read More
Enterprise AI Has a Checkbox Problem
… Today, AI sits adjacent to the work. It assists. It suggests. It drafts. But it doesn’t run the operating room, underwrite the loan, or manage the supply chain. Not in production. Not yet.
… “You can’t just slot [AI] in to a critical workflow in health care and all of a sudden show up where if you make a misdiagnosis or if you make a mischaracterization of a procedure, you can get fined and go to jail. If you’re in financial services and you make a mistake about somebody’s portfolio, or you make a misallocation and you point to a model, you will get sued and you will be in trouble.”
So what does every responsible enterprise do? They experiment at the edge. They run pilots. They check the box. They wait. — Read More
Five strategies for deeper AI adoption at work
Why do some people become enthusiastic, consistent adopters of AI, while others give it a try and shrug? We collaborated with Stanford University researchers to find out.
Over the last 18 months, we took the researchers behind the curtain at Google to observe how Googlers were learning and using AI in their day-to-day work. The timing of the study allowed us to observe firsthand how the rapid pace of AI was fundamentally challenging and changing how we build, collaborate and lead.
The published study found that while most people were eager to find value in AI tools, many were stuck in what the researchers called “simple substitution”: swapping existing tasks for AI alternatives. But many found the effort it took to learn the AI tool and get to a good result was often greater than the payoff. Crucially, the researchers found that successful adopters didn’t just focus on prompt engineering or its more recent sibling, context engineering. Instead, deep AI adopters completely changed how they approached AI — taking inspiration from product management. — Read More