In an always-on environment of Slack pings, email floods, and meeting overload, the scarcest resource isn’t information or compute—it’s sustained human attention. This article argues that deep work—distraction-free, cognitively demanding, value-creating effort—is now core infrastructure for modern high performance. Drawing on research in attention, task switching, interruptions, and flow, it explains why “multitasking” is actually rapid context switching that slows delivery, increases defects, and spikes stress. It then connects focus to hard business outcomes: fewer incidents, faster recovery, better code, higher throughput, and improved retention. Practical sections translate the science into playbooks for individuals, teams, and leaders—covering how to measure deep work, protect maker time, fix meeting and communication norms, and overcome cultural resistance to being “less available.” The conclusion is simple: in an AI-heavy, always-on world, organizations that systematically protect deep work will ship better work, with saner teams, at lower real cost. — Read More
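The cost of "rapid context switching" described above can be made concrete with a back-of-the-envelope model (my own illustration, not a formula from the article): if each interruption carries a fixed refocus cost, the focused hours lost per day are simply the product of the two.

```python
def refocus_hours_lost(interruptions_per_day: int, refocus_minutes: float) -> float:
    """Hours of focused work lost per day to regaining concentration.

    Assumes a fixed refocus cost per interruption; both inputs are
    illustrative, not figures taken from the article.
    """
    return interruptions_per_day * refocus_minutes / 60.0

# Example: 12 Slack pings a day, ~15 minutes to regain deep focus each time
print(refocus_hours_lost(12, 15))  # → 3.0 hours of deep work lost per day
```

Even with conservative inputs, the model shows why batching notifications into a few check-in windows, rather than answering each ping as it arrives, recovers a meaningful fraction of the workday.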
November 26, 2025
Scientists identify five ages of the human brain over a lifetime
Neuroscientists at the University of Cambridge have identified five “major epochs” of brain structure over the course of a human life, as our brains rewire to support different ways of thinking while we grow, mature, and ultimately decline.
A study led by Cambridge’s MRC Cognition and Brain Sciences Unit compared the brains of 3,802 people between zero and ninety years old using datasets of MRI diffusion scans, which map neural connections by tracking how water molecules move through brain tissue.
In a study published in Nature Communications, scientists say they detected five broad phases of brain structure in the average human life, split up by four pivotal “turning points” between birth and death when our brains reconfigure. — Read More
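Diffusion MRI of the kind described above is commonly summarized per voxel by a diffusion tensor, whose eigenvalues describe how freely water moves along versus across nerve fibers. A standard derived scalar is fractional anisotropy (FA). The sketch below implements the textbook FA formula; it is a general illustration of the technique, not code from the Cambridge study.

```python
import math

def fractional_anisotropy(l1: float, l2: float, l3: float) -> float:
    """Fractional anisotropy from the three eigenvalues of a diffusion tensor.

    Returns 0 for perfectly isotropic diffusion (water moves equally in all
    directions) and approaches 1 when diffusion is confined to one axis,
    as in a tightly bundled white-matter tract.
    """
    numerator = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2)
    denominator = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(0.5) * numerator / denominator

# Isotropic diffusion: no preferred direction
print(fractional_anisotropy(1.0, 1.0, 1.0))   # → 0.0
# Strongly directional diffusion, typical of coherent white-matter fiber
print(fractional_anisotropy(1.7, 0.2, 0.2))   # high FA, roughly 0.87
```

Tractography pipelines threshold and follow such directional measures voxel to voxel to reconstruct the connection maps the study compared across ages.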
Ilya Sutskever: AI’s bottleneck is ideas, not compute
Ilya Sutskever, in a rare interview with Dwarkesh Patel, laid out his sharp critique of the AI industry. He argues that reliance on brute-force “scaling” has hit a wall. While AI models may be brilliant on tests, they are fragile in real-world applications. He believes the pursuit of general intelligence must now shift from simply gathering more data to discovering new, more efficient scientific principles. — Read More
Vibe Check: Opus 4.5 Is the Coding Model We’ve Been Waiting For
It’s appropriate that this week is Thanksgiving, because Anthropic just dropped the best coding model we’ve ever used: Claude Opus 4.5.
We’ve been testing Opus 4.5 over the last few days on everything from vibe coded iOS apps to production codebases. It manages to be great at both planning—producing readable, intuitive, and user-focused plans—and coding. It’s highly technical and also human. We haven’t been this enthusiastic about a coding model since Anthropic’s Sonnet 3.5 dropped in June 2024.
The most significant thing about Opus 4.5 is that it extends the horizon of what you can realistically vibe code. The current generation of new models—Anthropic’s Sonnet 4.5, Google’s Gemini 3, or OpenAI’s Codex Max 5.1—can all competently build a minimum viable product in one shot, or fix a highly technical bug autonomously. But if you keep pushing them to vibe code more, they eventually start to trip over their own feet: the code becomes convoluted and contradictory, and you get stuck in endless bugs. We have not found that limit yet with Opus 4.5—it seems to be able to vibe code forever. — Read More
Producer Theory
… One [theory] is that integrating all of this data together is extremely valuable, and that the rush to do it—according to The Information, every major enterprise software company is building an “enterprise search” agent—is a very sensible war for a very strategic space. Google became the fourth biggest company in the world by being the front door for the internet; of course everyone wants to be the front door for work. And this messiness is just an intermediate state, until someone wins or we all run out of money.
A second theory, however, is that platforms aren’t as valuable as we think they are. For a decade now, Silicon Valley has come to accept, nearly as a matter of law, that the aggregators are the internet’s biggest winners. But aggregation theory assumes that production flows cleanly from left to right: from producers, to distributors, to consumers, with the potential for gatekeepers along the way. “Context”—especially if MCP succeeds in making it easy for one tool to talk to another—is not like that. Slack aggregates what Notion knows; Notion aggregates what Slack knows; ChatGPT aggregates what everyone knows, and everyone uses ChatGPT to aggregate everything. Producers are consumers, consumers become producers, and everyone is a distributor. There aren’t orderly lines of people walking into one big front door; there are agents crisscrossing through hundreds of side doors.
In that telling, the right analogy for context isn’t content, but knowledge. — Read More
Meet the new Chinese vibe coding app that’s so popular, one of its tools crashed
A Chinese vibe coding tool went viral so fast that its signature feature crashed just days after it launched.
LingGuang, an AI app for vibe coding and building apps from plain-language prompts, launched last Tuesday and reached over 1 million downloads in four days. By Monday, the app had crossed 2 million downloads, said Ant Group, the Chinese tech group that built the AI coding assistant.
On Monday, LingGuang ranked first on Apple’s mainland China App Store for free utilities apps and sixth overall for free apps. — Read More