The Iceberg Index: Measuring Workforce Exposure Across the AI Economy

Artificial Intelligence is reshaping America’s $9.4 trillion labor market, with cascading effects that extend far beyond visible technology sectors. When AI transforms quality control tasks in automotive plants, consequences spread through logistics networks, supply chains, and local service economies. Yet traditional workforce metrics cannot capture these ripple effects: they measure employment outcomes after disruption occurs, not where AI capabilities overlap with human skills before adoption crystallizes.

Project Iceberg addresses this gap using Large Population Models to simulate the human-AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills and interacting with thousands of AI tools. It introduces the Iceberg Index, a skills-centered metric that measures the wage value of skills AI systems can perform within each occupation. The Index captures technical exposure, where AI can perform occupational tasks, not displacement outcomes or adoption timelines.

Analysis shows that visible AI adoption concentrated in computing and technology (2.2% of wage value, approx $211 billion) represents only the tip of the iceberg. Technical capability extends far below the surface through cognitive automation spanning administrative, financial, and professional services (11.7%, approx $1.2 trillion). This exposure is fivefold larger and geographically distributed across all states rather than confined to coastal hubs. Traditional indicators such as GDP, income, and unemployment explain less than 5% of this skills-based variation, underscoring why new indices are needed to capture exposure in the AI economy. By simulating how these capabilities may spread under different scenarios, Iceberg enables policymakers and business leaders to identify exposure hotspots, prioritize investments, and test interventions before committing billions to implementation. — Read More
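As a toy illustration of the shape of such a metric (not the paper’s actual methodology), a skills-based exposure index can be sketched as the share of total wage value attached to skills an AI system can perform. All occupation names and numbers below are made up:

```python
# Toy sketch of a wage-value exposure index, in the spirit of the Iceberg
# Index described above. Illustrative only: occupations, wage totals, and
# AI-performable shares are invented for the example.

occupations = [
    # (name, total_wages_usd_bn, share_of_wage_value_in_ai_performable_skills)
    ("software_dev", 500.0, 0.40),
    ("admin_support", 800.0, 0.25),
    ("trucking", 300.0, 0.05),
]

def exposure_index(occs):
    """Exposed wage value as a fraction of total wage value."""
    total = sum(wages for _, wages, _ in occs)
    exposed = sum(wages * share for _, wages, share in occs)
    return exposed / total

idx = exposure_index(occupations)
print(f"exposed share of wage value: {idx:.1%}")
```

The point of the construction is that exposure is weighted by wage value, not headcount, so a small occupation with highly automatable skills can move the index more than a large one with few AI-performable tasks.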

#strategy

Deep Work in an Always-On World: How Focus Becomes Your Unfair Advantage

In an always-on environment of Slack pings, email floods, and meeting overload, the scarcest resource isn’t information or compute—it’s sustained human attention. This article argues that deep work—distraction-free, cognitively demanding, value-creating effort—is now core infrastructure for modern high performance. Drawing on research in attention, task switching, interruptions, and flow, it explains why “multitasking” is actually rapid context switching that slows delivery, increases defects, and spikes stress. It then connects focus to hard business outcomes: fewer incidents, faster recovery, better code, higher throughput, and improved retention. Practical sections translate the science into playbooks for individuals, teams, and leaders—covering how to measure deep work, protect maker time, fix meeting and communication norms, and overcome cultural resistance to being “less available.” The conclusion is simple: in an AI-heavy, always-on world, organizations that systematically protect deep work will ship better work, with saner teams, at lower real cost. — Read More

#human

Scientists identify five ages of the human brain over a lifetime

Neuroscientists at the University of Cambridge have identified five “major epochs” of brain structure over the course of a human life, as our brains rewire to support different ways of thinking while we grow, mature, and ultimately decline.

A study led by Cambridge’s MRC Cognition and Brain Sciences Unit compared the brains of 3,802 people between zero and ninety years old using datasets of MRI diffusion scans, which map neural connections by tracking how water molecules move through brain tissue.

In a study published in Nature Communications, scientists say they detected five broad phases of brain structure in the average human life, split up by four pivotal “turning points” between birth and death when our brains reconfigure. — Read More

#human

Ilya Sutskever: AI’s bottleneck is ideas, not compute

Ilya Sutskever, in a rare interview with Dwarkesh Patel, laid out his sharp critique of the AI industry. He argues that reliance on brute-force “scaling” has hit a wall: while AI models may be brilliant on tests, they remain fragile in real-world applications. He believes the pursuit of general intelligence must now shift from simply gathering more data to discovering new, more efficient scientific principles. — Read More

#strategy

Vibe Check: Opus 4.5 Is the Coding Model We’ve Been Waiting For

It’s appropriate that this week is Thanksgiving, because Anthropic just dropped the best coding model we’ve ever used: Claude Opus 4.5.

We’ve been testing Opus 4.5 over the last few days on everything from vibe-coded iOS apps to production codebases. It manages to be great at both planning—producing readable, intuitive, and user-focused plans—and coding. It’s highly technical and also human. We haven’t been this enthusiastic about a coding model since Anthropic’s Sonnet 3.5 dropped in June 2024.

The most significant thing about Opus 4.5 is that it extends the horizon of what you can realistically vibe code. The current generation of models—Anthropic’s Sonnet 4.5, Google’s Gemini 3, and OpenAI’s Codex Max 5.1—can all competently build a minimum viable product in one shot, or fix a highly technical bug autonomously. But eventually, if you kept pushing them to vibe code more, they’d start to trip over their own feet: the code would become convoluted and contradictory, and you’d get stuck in endless bugs. We have not found that limit yet with Opus 4.5—it seems to be able to vibe code forever. — Read More

#devops

Producer Theory

… One [theory] is that integrating all of this data is extremely valuable, and that the rush to do it—according to The Information, every major enterprise software company is building an “enterprise search” agent—is a very sensible war for a very strategic space. Google became the fourth biggest company in the world by being the front door for the internet; of course everyone wants to be the front door for work. And this messiness is just an intermediate state, until someone wins or we all run out of money.

A second theory, however, is that platforms aren’t as valuable as we think they are. For a decade now, Silicon Valley has come to accept, nearly as a matter of law, that the aggregators are the internet’s biggest winners. But aggregation theory assumes that production flows cleanly from left to right: from producers, to distributors, to consumers, with the potential for gatekeepers along the way. “Context”—especially if MCP succeeds in making it easy for one tool to talk to another—is not like that. Slack aggregates what Notion knows; Notion aggregates what Slack knows; ChatGPT aggregates what everyone knows, and everyone uses ChatGPT to aggregate everything. Producers are consumers, consumers become producers, and everyone is a distributor. There aren’t orderly lines of people walking through one big front door; there are agents crisscrossing through hundreds of side doors.

In that telling, the right analogy for context isn’t content, but knowledge. — Read More

#strategy

Meet the new Chinese vibe coding app that’s so popular, one of its tools crashed

A Chinese vibe coding tool went viral so fast that its signature feature crashed just days after it launched.

LingGuang, an AI app for vibe coding and building apps using plain-language prompts, launched last Tuesday and reached over 1 million downloads in four days. By Monday, the app had crossed 2 million downloads, according to Ant Group, the Chinese tech group that built it.

On Monday, LingGuang ranked first on Apple’s mainland China App Store for free utilities apps and sixth overall for free apps. — Read More

#china-ai

How Google Pulled Off Its Stunning, Rapid-Fire AI Turnaround

Google came into 2025 with its AI stumbles looming large. The company’s slow start to the generative AI race turned borderline catastrophic in 2024 when its products generated images of diverse Nazis, told users to eat rocks, and couldn’t match OpenAI’s shine. AI chat was seen as a major threat to search, and outsiders didn’t see a coherent strategy. In January, Google stock was on the sale rack and murmurs about CEO Sundar Pichai’s job security floated around the internet.

We’re not quite in December and Google has masterfully reversed course. Its AI models are world class. Its products are buzzy again. Its cloud business is booming. And search is stronger than ever. Its stock is up 56% this year and, at $3.59 trillion, it just surpassed Microsoft’s market cap. Now, no serious person would question Pichai’s job status. — Read More

#big7

AI Models Are Becoming a Commodity: Are You Ready for the 5 Second-Order Effects Reshaping Industries by 2026?

As AI models become as common as electricity, the real competitive advantage shifts. Discover the five critical second-order effects of AI commoditization and learn how industries are preparing for a transformed business landscape in 2026.

For the last several years, the conversation around artificial intelligence has been dominated by a narrative of scarcity and exclusive power. Having access to a state-of-the-art AI model was a golden ticket, a competitive moat that only a handful of tech behemoths could afford to build. That era is rapidly coming to a close. We are now entering the age of AI commoditization, where powerful models are becoming a standardized, widely accessible utility—much like cloud computing or electricity before them.

This seismic shift is being accelerated by fierce market competition, the proliferation of high-performance open-source models, and aggressive pricing from major cloud providers. The first-order effects are already visible and dramatic. We’re seeing a race to the bottom on pricing, with some analyses showing that the cost of using top-tier models dropped by over 80% in just one year. This democratization of access is just the beginning. — Read More

#strategy

A tsunami of COGS

The AI industry is in correction mode. Last week Nvidia reported its earnings and the world was holding its breath. If they miss, it is so over. If they crush it, we are so back. In the end, earnings beat expectations, but the stock slid anyway after an initial bump. Many things in the AI boom smell bad. The way money fuels the investment spree is quite questionable, and it has become a meme, with the same $1T investment check changing hands among a small set of participants. We can call this “vendor financing”, but it is not a great look.

In my opinion, the players most at risk here are the hyperscalers like Microsoft, Amazon, and Oracle, and the neocloud players like Nebius and CoreWeave. They sit between the providers of chips, like Nvidia, and the buyers of compute, like OpenAI. They really have no choice other than to buy real chips from Nvidia and hope that there will be sustainable demand (read: revenue > COGS) from buyers of compute so that they can honor those commitments. If not, the buyers of compute will walk away, resizing their commitments (or going bankrupt); Nvidia has already sold those GPUs; and the hyperscalers are left with billions of dollars and gigawatts of unused capacity that depreciates very quickly due to the short GPU lifespan. — Read More
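The revenue-versus-COGS squeeze above is easy to see in a back-of-envelope form. A minimal sketch, with entirely hypothetical numbers; the point is the shape of the calculation, in which depreciation driven by GPU lifespan dominates the cost side:

```python
# Toy model of a compute provider's annual margin: revenue minus
# straight-line GPU depreciation and operating cost.
# All figures are invented for illustration (units: $bn/year).

def annual_margin(gpu_capex, lifespan_years, opex, annual_revenue):
    """Yearly margin after straight-line depreciation of GPU capex."""
    depreciation = gpu_capex / lifespan_years
    return annual_revenue - depreciation - opex

# The same capex spread over 3 years instead of 6 doubles the annual
# depreciation hit, flipping the business from profit to loss.
m_short = annual_margin(gpu_capex=30.0, lifespan_years=3, opex=2.0, annual_revenue=10.0)
m_long = annual_margin(gpu_capex=30.0, lifespan_years=6, opex=2.0, annual_revenue=10.0)
print(f"3-year lifespan: {m_short:+.1f} $bn/yr, 6-year lifespan: {m_long:+.1f} $bn/yr")
```

This is why the short GPU lifespan matters so much in the argument: a hyperscaler can be solvent or underwater on the same capacity depending almost entirely on how fast the hardware depreciates relative to compute demand.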

#investing