Is AI a bubble?

A month ago, I set out to answer a deceptively simple question: Is AI a bubble?

Since 2024, people have been asking me this as I’ve spoken at events around the world.

Even as Wall Street bankers largely see this as an investment boom, more people are asking the question in meeting rooms and conference halls in Europe and the US.

Some have made up their minds. 

Gary Marcus called it a “peak bubble.” The Atlantic warns that there is a “possibility that we’re currently experiencing an AI bubble, in which investor excitement has gotten too far ahead of the technology’s near-term productivity benefits. If that bubble bursts, it could put the dot-com crash to shame – and the tech giants and their Silicon Valley backers won’t be the only ones who suffer.” The Economist said that “the potential cost has risen alarmingly high.”

The best way to understand a question like this is to create a framework, one that you can update as new evidence emerges. Putting this together has taken dozens of hours of data analysis and modeling, plus numerous conversations with investors and executives.

This essay is that framework: five gauges to weigh genAI against history’s bubbles. — Read More

#investing

Why America Builds AI Girlfriends and China Makes AI Boyfriends

On September 11, the U.S. Federal Trade Commission launched an inquiry into seven tech companies that make AI chatbot companion products, including Meta, OpenAI, and Character AI, over concerns that AI chatbots may prompt users, “especially children and teens,” to trust them and form unhealthy dependencies.

Four days later, China published its AI Safety Governance Framework 2.0, explicitly listing “addiction and dependence on anthropomorphized interaction (拟人化交互的沉迷依赖)” among its top ethical risks, even above concerns about AI loss of control. Interestingly, directly following the addiction risk is the risk of “challenging existing social order (挑战现行社会秩序),” including traditional “views on childbirth (生育观).”

What makes AI chatbot interaction so concerning? Why is the U.S. more worried about child interaction, whereas the Chinese government views AI companions as a threat to family-making and childbearing? The answer lies in how different societies build different types of AI companions, which then create distinct societal risks. Drawing from an original market scan of 110 global AI companion platforms and analysis of China’s domestic market, I explore here how similar AI technologies produce vastly different companion experiences—American AI girlfriends versus Chinese AI boyfriends—when shaped by cultural values, regulatory frameworks, and geopolitical tensions. — Read More

#china-vs-us

Hierarchical Reasoning Model

Reasoning, the process of devising and executing complex goal-oriented action sequences, remains a critical challenge in AI. Current large language models (LLMs) primarily employ Chain-of-Thought (CoT) techniques, which suffer from brittle task decomposition, extensive data requirements, and high latency. Inspired by the hierarchical and multi-timescale processing in the human brain, we propose the Hierarchical Reasoning Model (HRM), a novel recurrent architecture that attains significant computational depth while maintaining both training stability and efficiency. HRM executes sequential reasoning tasks in a single forward pass without explicit supervision of the intermediate process, through two interdependent recurrent modules: a high-level module responsible for slow, abstract planning, and a low-level module handling rapid, detailed computations. With only 27 million parameters, HRM achieves exceptional performance on complex reasoning tasks using only 1000 training samples. The model operates without pre-training or CoT data, yet achieves nearly perfect performance on challenging tasks including complex Sudoku puzzles and optimal path finding in large mazes. Furthermore, HRM outperforms much larger models with significantly longer context windows on the Abstraction and Reasoning Corpus (ARC), a key benchmark for measuring artificial general intelligence capabilities. These results underscore HRM’s potential as a transformative advancement toward universal computation and general-purpose reasoning systems. — Read More
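
To make the two-timescale idea in the abstract more concrete, here is a minimal sketch of a hierarchical recurrence in PyTorch: a low-level cell that updates at every step, conditioned on a high-level cell that updates only periodically. The module sizes, update period, readout, and toy input are illustrative assumptions of mine, not the authors' HRM implementation.

```python
# Minimal sketch of a two-timescale hierarchical recurrence, loosely following
# the abstract above. Sizes, update period, and readout are assumptions;
# this is NOT the authors' HRM implementation.
import torch
import torch.nn as nn


class TwoTimescaleReasoner(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int = 128, period: int = 4):
        super().__init__()
        self.period = period  # low-level steps per high-level update
        # Low-level module: fast, detailed computation at every step.
        self.low = nn.GRUCell(input_dim + hidden_dim, hidden_dim)
        # High-level module: slow, abstract planning, updated every `period` steps.
        self.high = nn.GRUCell(hidden_dim, hidden_dim)
        self.readout = nn.Linear(hidden_dim, input_dim)

    def forward(self, x: torch.Tensor, steps: int = 16) -> torch.Tensor:
        # x: (batch, input_dim) -- a fixed problem encoding (e.g. a flattened grid).
        batch = x.size(0)
        h_low = x.new_zeros(batch, self.low.hidden_size)
        h_high = x.new_zeros(batch, self.high.hidden_size)
        for t in range(steps):
            # The low-level state evolves every step, conditioned on the current plan.
            h_low = self.low(torch.cat([x, h_high], dim=-1), h_low)
            if (t + 1) % self.period == 0:
                # Periodically, the high-level module summarizes low-level work.
                h_high = self.high(h_low, h_high)
        # One forward pass produces the answer; no supervision of intermediate steps.
        return self.readout(h_high)


if __name__ == "__main__":
    model = TwoTimescaleReasoner(input_dim=81)   # e.g. a flattened 9x9 Sudoku grid
    out = model(torch.randn(2, 81))              # toy batch of 2 puzzles
    print(out.shape)                             # torch.Size([2, 81])
```

The design choice this illustrates is the one the abstract emphasizes: depth comes from iterating a small recurrent core rather than from stacking layers or emitting a chain of thought, with the slow module steering the fast one.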

#human

Scientists just developed a new AI modeled on the human brain — it’s outperforming LLMs like ChatGPT at reasoning tasks

The hierarchical reasoning model (HRM) system is modeled on the way the human brain processes complex information, and it outperformed leading LLMs in a notoriously hard-to-beat benchmark.

Scientists have developed a new type of artificial intelligence (AI) model that can reason differently from most large language models (LLMs) like ChatGPT, resulting in much better performance in key benchmarks.

The new reasoning AI, called a hierarchical reasoning model (HRM), is inspired by the hierarchical and multi-timescale processing in the human brain — the way different brain regions integrate information over varying durations (from milliseconds to minutes). — Read More

#human