The Man Behind Google’s AI Machine | Demis Hassabis Interview

Read More

#videos

Why We’ve Tried to Replace Developers Every Decade Since 1969

Every decade brings new promises: this time, we’ll finally make software development simple enough that we won’t need so many developers. From COBOL to AI, the pattern repeats. Business leaders grow frustrated with slow delivery and high costs. Developers feel misunderstood and undervalued. Understanding why this cycle has persisted for fifty years reveals what both sides need to know about the nature of software work. — Read More

#devops

The A in AGI stands for Ads

Here we go again, the tech press is having another AI doom cycle.

I’ve primarily written this as a response to an NYT analyst painting a completely unsubstantiated, baseless, speculative, outrageous, EGREGIOUS, preposterous “grim picture” of OpenAI going bust.

Mate, come on. OpenAI is not dying, and they’re not running out of money. Yes, they’re creating possibly the craziest circular economy and defying every economics law since Adam Smith published ‘The Wealth of Nations’. $1T in commitments is genuinely insane. But I doubt they’re looking to be acquired; honestly, by whom? You don’t raise $40 BILLION at a $260 BILLION VALUATION to get acquired. It’s all for the $1T IPO. — Read More

#strategy

ChatGPT users are about to get hit with targeted ads

An ongoing conversation — both within and outside of the tech community — has been about just how and when OpenAI, which is currently valued at $500 billion, will make money. Well, there’s one surefire way to do that, and that is through advertising. In the near term, that seems to be the AI giant’s plan, as it announced this week that limited ads are headed to certain ChatGPT users.

In a blog post published Friday, OpenAI said that it will begin testing ads in the U.S. for both its free and Go tiers. (Go accounts, which cost $8 a month, were introduced globally on Friday.) The company frames this as a way to sustain free access while generating revenue from people who aren’t ready to commit to a paid subscription. For the time being, the company’s more expensive paid tiers — Pro, Plus, Business, and Enterprise — will not be getting any ads. — Read More

#chatbots

Junior Developers in the Age of AI

For a long time, we were all hand-wringing over the shortage of software developers. School districts rolled out coding curriculums. Colleges debuted software “labs”. “Bootcamps” became a $700m industry.

Today, we have the opposite problem. Thousands of trained, entry-level engineers that no one wants to hire. — Read More

#devops

AI’s Way Cooler Trillion-Dollar Opportunity: Vibe Graphs

The last generation of enterprise software became trillion-dollar platforms by owning what happened. Salesforce owns the customer record. Workday owns the employee record. SAP owns the operational record.

The next trillion-dollar opportunity? Owning what the vibe was when it happened.

We call this the vibes graph: a living record of ambient organizational sentiment, stitched across entities and time, so that vibe becomes queryable. — Read More

#strategy

1X unveils 1XWM world model for NEO robot platform

1X Technologies has announced the integration of its new video-pretrained world model, 1XWM, into its NEO robot platform. This development targets robotics researchers, developers, and early adopters interested in advanced home robots that navigate and act with human-like understanding. The initial release is for a limited group, primarily for research and internal evaluation, with broader commercial deployment expected following further validation.

The 1XWM model represents a technical shift from conventional vision-language-action (VLA) models by using internet-scale video pretraining combined with egocentric human and robot data. This model predicts robot actions by generating text-conditioned video rollouts, which are then translated into motion commands through an Inverse Dynamics Model. Unlike prior approaches, this method does not require tens of thousands of robot demonstration hours, enabling faster adaptation to new tasks. — Read More
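The rollout-then-translate pipeline described above can be sketched in a few lines. This is a toy illustration only: `WorldModel`, `InverseDynamicsModel`, and every method name here are hypothetical stand-ins, not 1X’s actual API, and the “predictions” are placeholder arrays.

```python
import numpy as np

class WorldModel:
    """Stand-in for a video-pretrained world model: given the current
    frame and a text instruction, predict a short future video rollout."""
    def rollout(self, frame, instruction, horizon=4):
        rng = np.random.default_rng(0)
        # Placeholder: perturb the frame to stand in for predicted frames.
        return [frame + 0.01 * (t + 1) * rng.standard_normal(frame.shape)
                for t in range(horizon)]

class InverseDynamicsModel:
    """Stand-in IDM: infers the action that explains the transition
    between two consecutive frames (here, a toy difference-based proxy)."""
    def infer_action(self, frame_t, frame_t1):
        return float((frame_t1 - frame_t).mean())

def plan_actions(frame, instruction, wm, idm):
    # 1) Generate a text-conditioned video rollout from the current frame.
    frames = [frame] + wm.rollout(frame, instruction)
    # 2) Translate each predicted frame transition into a motion command.
    return [idm.infer_action(a, b) for a, b in zip(frames, frames[1:])]

wm, idm = WorldModel(), InverseDynamicsModel()
actions = plan_actions(np.zeros((8, 8)), "pick up the cup", wm, idm)
print(len(actions))  # one motion command per predicted transition
```

The key structural point survives even in this toy version: actions are never predicted directly from pixels plus text; they are recovered after the fact from an imagined video of the task being done.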

#robotics

A multimodal sleep foundation model for disease prediction

Sleep is a fundamental biological process with broad implications for physical and mental health, yet its complex relationship with disease remains poorly understood. Polysomnography (PSG)—the gold standard for sleep analysis—captures rich physiological signals but is underutilized due to challenges in standardization, generalizability and multimodal integration. To address these challenges, we developed SleepFM, a multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations. Trained on a curated dataset of over 585,000 hours of PSG recordings from approximately 65,000 participants across several cohorts, SleepFM produces latent sleep representations that capture the physiological and temporal structure of sleep and enable accurate prediction of future disease risk. From one night of sleep, SleepFM accurately predicts 130 conditions with a C-Index of at least 0.75 (Bonferroni-corrected P < 0.01), including all-cause mortality (C-Index, 0.84), dementia (0.85), myocardial infarction (0.81), heart failure (0.80), chronic kidney disease (0.79), stroke (0.78) and atrial fibrillation (0.78). Moreover, the model demonstrates strong transfer learning performance on a dataset from the Sleep Heart Health Study—a dataset that was excluded from pretraining—and performs competitively with specialized sleep-staging models such as U-Sleep and YASA on common sleep analysis tasks, achieving mean F1 scores of 0.70–0.78 for sleep staging and accuracies of 0.69 and 0.87 for classifying sleep apnea severity and presence. This work shows that foundation models can learn the language of sleep from multimodal sleep recordings, enabling scalable, label-efficient analysis and disease prediction. — Read More
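The abstract’s core training idea, contrastive learning that pulls together embeddings of different modalities from the same recording, can be sketched with a standard InfoNCE-style loss. Everything below is illustrative, not SleepFM’s actual implementation: the function name, shapes, and temperature are assumptions.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Toy InfoNCE loss between two batches of modality embeddings
    (e.g. EEG vs. respiratory channels from the same sleep epochs)."""
    # L2-normalize so the similarity matrix holds cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (N, N) pairwise similarities
    # Row i's positive is column i (same epoch, other modality);
    # every other column in the row acts as a negative.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
z = rng.standard_normal((16, 32))
aligned = info_nce(z, z)        # views already agree: low loss
shuffled = info_nce(z, z[::-1]) # positives mismatched: higher loss
print(aligned < shuffled)
```

The paper’s contribution is accommodating variable PSG channel configurations within this kind of objective; the toy above fixes the shapes for clarity.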

#human

When Will They Take Our Jobs?

And once they take our jobs, will we be able to find new ones? Will AI take those too?

Seb Krier recently wrote an unusually good take on that, which will center this post.

I believe that Seb is being too optimistic on several fronts, but in a considered and highly reasonable way. The key is to understand the assumptions being made, and also to understand that he is only predicting that the era of employment optimism will last for 10-20 years. — Read More

#strategy

2026: This is AGI

Years ago, some leading researchers told us that their objective was AGI. Eager to hear a coherent definition, we naively asked, “How do you define AGI?”. They paused, looked at each other tentatively, and then offered up what’s since become something of a mantra in the field of AI: “well, we each kind of have our own definitions, but we’ll know it when we see it.”

This vignette typifies our quest for a concrete definition of AGI. It has proven elusive.

While the definition is elusive, the reality is not. AGI is here, now.

Coding agents are the first example. There are more on the way.

Long-horizon agents are functionally AGI, and 2026 will be their year. — Read More

#investing