A multimodal sleep foundation model for disease prediction

Sleep is a fundamental biological process with broad implications for physical and mental health, yet its complex relationship with disease remains poorly understood. Polysomnography (PSG)—the gold standard for sleep analysis—captures rich physiological signals but is underutilized due to challenges in standardization, generalizability and multimodal integration. To address these challenges, we developed SleepFM, a multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations. Trained on a curated dataset of over 585,000 hours of PSG recordings from approximately 65,000 participants across several cohorts, SleepFM produces latent sleep representations that capture the physiological and temporal structure of sleep and enable accurate prediction of future disease risk. From one night of sleep, SleepFM accurately predicts 130 conditions with a C-Index of at least 0.75 (Bonferroni-corrected P < 0.01), including all-cause mortality (C-Index, 0.84), dementia (0.85), myocardial infarction (0.81), heart failure (0.80), chronic kidney disease (0.79), stroke (0.78) and atrial fibrillation (0.78). Moreover, the model demonstrates strong transfer learning performance on a dataset from the Sleep Heart Health Study—a dataset that was excluded from pretraining—and performs competitively with specialized sleep-staging models such as U-Sleep and YASA on common sleep analysis tasks, achieving mean F1 scores of 0.70–0.78 for sleep staging and accuracies of 0.69 and 0.87 for classifying sleep apnea severity and presence. This work shows that foundation models can learn the language of sleep from multimodal sleep recordings, enabling scalable, label-efficient analysis and disease prediction. — Read More
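The abstract does not specify SleepFM's training objective beyond "contrastive learning across multiple PSG configurations," but a rough illustration of the general idea is a symmetric InfoNCE loss between embeddings of the same sleep epochs seen through two modalities. Everything below (function names, the temperature value, the choice of modalities) is an assumption for illustration, not the paper's method:

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss between two modality embeddings.

    z_a, z_b: (batch, dim) L2-normalized embeddings of the same sleep
    epochs seen through two PSG modalities (e.g. EEG vs. ECG channels).
    Matching rows are positives; all other pairings serve as negatives.
    """
    logits = z_a @ z_b.T / temperature      # (batch, batch) similarities
    labels = np.arange(len(z_a))            # positives sit on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)            # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()             # diagonal log-probs

    # average the a->b and b->a directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Pulling matched modality pairs together while pushing mismatched pairs apart is what lets such a model learn shared physiological structure without diagnosis labels.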

#human

When Will They Take Our Jobs?

And once they take our jobs, will we be able to find new ones? Will AI take those too?

Seb Krier recently wrote an unusually good take on that, which will anchor this post.


I believe that Seb is being too optimistic on several fronts, but in a considered and highly reasonable way. The key is to understand the assumptions being made, and also to understand that he is only predicting that the era of employment optimism will last for 10-20 years. — Read More

#strategy

2026: This is AGI

Years ago, some leading researchers told us that their objective was AGI. Eager to hear a coherent definition, we naively asked “how do you define AGI?”. They paused, looked at each other tentatively, and then offered up what’s since become something of a mantra in the field of AI: “well, we each kind of have our own definitions, but we’ll know it when we see it.”

This vignette typifies our quest for a concrete definition of AGI. It has proven elusive.

While the definition is elusive, the reality is not. AGI is here, now.

Coding agents are the first example. There are more on the way.

Long-horizon agents are functionally AGI, and 2026 will be their year. — Read More

#investing

China’s Z.ai claims it trained a model using only Huawei hardware

Chinese outfit Zhipu AI claims it trained a new model entirely using Huawei hardware, and that it’s the first company to build an advanced model entirely on Chinese hardware.

Zhipu, which styles itself Z.ai and runs a chatbot at that address, offers several models named General Language Model (GLM). On Wednesday the company announced GLM-Image, which it says employs “an independently developed ‘autoregressive + diffusion decoder’ hybrid architecture, which enables the joint generation of image and language models,” and which it claims represents an important advance on the Nano Banana Pro image-generating AI. — Read More

#china-ai

Vibe Coding Without System Design is a Trap

Lowering the barrier to creation has always been a net positive. WordPress turned anyone into a publisher. YouTube turned anyone into a broadcaster. Shopify turned anyone into an e-commerce operator. AI-assisted coding is doing the same for product building.

Let a thousand flowers bloom. I’m all in!

The problem: AI is very good at helping you build something. It’s not very good at helping you build something well.

The difference matters. — Read More

#devops

How the hell are you supposed to have a career in tech in 2026?

The number one question I get from my friends, acquaintances, and mentees in the technology industry these days is, by far, some variation on the basic theme of, “what the hell are we supposed to do now?”

There have been mass layoffs that leave more tech workers than ever looking for new roles in the worst market we’ve ever seen. Many of the most talented, thoughtful and experienced people in the industry are feeling worried, confused, and ungrounded in a field that no longer looks familiar.

If you’re outside the industry, you may be confused — isn’t there an AI boom that’s getting hundreds of billions of dollars in investments? Doesn’t that mean the tech bros are doing great? What you may have missed is that half a million tech workers have been laid off in the years since ChatGPT was released; the same attacks on marginalized workers and DEI and “woke” that the tech robber barons launched against the rest of society were aimed at their own companies first. — Read More

#strategy

RedTeam-Tools

This GitHub repository contains a collection of 150+ tools and resources that can be useful for red teaming activities.

Some of the tools may be specifically designed for red teaming, while others are more general-purpose and can be adapted for use in a red teaming context.

🔗 If you are a Blue Teamer, check out BlueTeam-Tools — Read More

#cyber

When AI Meets DevOps To Build Self-Healing Systems

Traditional DevOps, with its rule-based automation, is struggling to work effectively in today’s complex tech world. But when combined with AIOps it can lead to IT systems that predict failures and solve issues without human intervention.

In the fast-paced and ever-changing world of software development and IT operations, automation is a great asset. From CI/CD pipelines to provisioning infrastructure, DevOps has equipped teams to construct and deploy software faster than ever. But as systems become more complex, distributed, and data-rich, automation in isolation is not enough.

This is where artificial intelligence for IT operations (AIOps) enters the conversation. By embedding AI and machine learning with DevOps practices, AIOps shifts the paradigms beyond a workflow of defined rules. Not only does AIOps analyse data patterns and detect anomalies, it can also anticipate failures and take preemptive action with little or no human assistance. — Read More
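As a toy illustration of that detect-then-act loop (not any particular AIOps product), here is a rolling z-score detector that flags metric readings far outside their recent baseline, the point at which a remediation hook such as a restart or scale-up would fire. The class name, window size, and threshold are all assumptions:

```python
from collections import deque

class AnomalyDetector:
    """Rolling z-score detector: flag points far from the recent mean.

    A minimal stand-in for the 'detect anomalies, act preemptively'
    loop; real AIOps platforms use far richer models and signals.
    """
    def __init__(self, window=60, threshold=3.0):
        self.window = deque(maxlen=window)   # recent metric history
        self.threshold = threshold           # z-score trigger level

    def observe(self, value):
        flagged = False
        if len(self.window) >= 10:           # wait for a baseline first
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5
            if std > 0 and abs(value - mean) / std > self.threshold:
                flagged = True               # here a remediation hook would fire
        self.window.append(value)
        return flagged
```

Feeding it a steady latency series returns False; a sudden spike returns True, which is the moment an AIOps pipeline would restart a pod or shift traffic before users notice.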

#devops

AI can now ‘see’ optical illusions. What does it tell us about our own brains?

Our eyes can frequently play tricks on us, but scientists have discovered that some artificial intelligence can fall for the same illusions. And it is changing what we know about our brains.

When we look up at the Moon, it seems larger when it is close to the horizon than when it is higher in the sky, even though its size, and the distance between the Earth and the Moon, remain much the same over the course of a night.

Optical illusions such as these show that we don’t always perceive reality as it is. They are often considered to be mistakes made by our visual system. But illusions also reveal the clever shortcuts our brains use to extract the most important details of our surroundings.

In truth, our brains only accept a sip of the world around us – it would be too much to process every detail of our busy visual environments, so instead they pick out only the details we need.  — Read More

#vision

Engram: How DeepSeek Added a Second Brain to Their LLM

When DeepSeek released their technical reports for V2 and V3, the ML community focused on the obvious innovations: massive parameter counts, clever load balancing, and Multi-head Latent Attention. But buried in their latest research is something that deserves more attention: a different way to think about what an LLM should remember.

The insight is deceptively simple. Large language models spend enormous computational effort reconstructing patterns they’ve seen millions of times before. The phrase “United States of” almost certainly ends with “America.” “New York” probably precedes “City” or “Times.” These patterns are burned into the training data, and the model learns them, but it learns them the hard way: by propagating gradients through billions of parameters across dozens of layers.

What if you could just look them up? — Read More
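The article's details of DeepSeek's actual mechanism aren't reproduced here, but the "just look them up" idea can be sketched as an n-gram memory table consulted before the expensive forward pass. Everything in this sketch (class name, table format, fallback behavior) is hypothetical and far simpler than a real design:

```python
from collections import Counter

class NgramMemory:
    """Toy lookup memory for high-frequency continuations.

    Maps a context tuple of tokens to its most common next token,
    so memorized patterns can be retrieved by lookup instead of
    being reconstructed through every transformer layer.
    """
    def __init__(self, n=3):
        self.n = n
        self.table = {}

    def train(self, tokens):
        counts = {}
        for i in range(len(tokens) - self.n):
            ctx = tuple(tokens[i:i + self.n])
            counts.setdefault(ctx, Counter())[tokens[i + self.n]] += 1
        # keep only the single most frequent continuation per context
        self.table = {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

    def lookup(self, context):
        """Return the memorized continuation, or None to fall back to the LLM."""
        return self.table.get(tuple(context[-self.n:]))
```

Trained on text containing "United States of America", a lookup on the context ("united", "states", "of") returns "america" directly; an unseen context returns None, and the full model takes over.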

#performance