Junior Developers in the Age of AI

For a long time, we were all hand-wringing over the shortage of software developers. School districts rolled out coding curriculums. Colleges debuted software “labs”. “Bootcamps” became a $700m industry.

Today, we have the opposite problem: thousands of trained, entry-level engineers that no one wants to hire. — Read More

#devops

AI’s Way Cooler Trillion-Dollar Opportunity: Vibe Graphs

The last generation of enterprise software became trillion-dollar platforms by owning what happened. Salesforce owns the customer record. Workday owns the employee record. SAP owns the operational record.

The next trillion-dollar opportunity? Owning what the vibe was when it happened.

We call this the vibes graph: a living record of ambient organizational sentiment, stitched across entities and time, so that vibe becomes queryable. — Read More
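
To make "queryable vibe" concrete, here is a toy sketch of what such a graph might look like: time-stamped sentiment edges between organizational entities, plus a simple aggregate query. Everything in it (the VibesGraph class, its fields, and the sample data) is a hypothetical illustration, not anything specified in the article.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

# Hypothetical illustration of a "vibes graph": time-stamped sentiment edges
# between organizational entities. All names and fields are assumptions made
# for this sketch; the article does not define a schema.

@dataclass
class VibeEdge:
    source: str        # e.g. a team or person
    target: str        # e.g. a project, meeting, or customer account
    sentiment: float   # -1.0 (grim) .. 1.0 (great)
    at: datetime

class VibesGraph:
    def __init__(self):
        self.edges: list[VibeEdge] = []

    def record(self, source: str, target: str, sentiment: float, at: datetime) -> None:
        self.edges.append(VibeEdge(source, target, sentiment, at))

    def vibe_around(self, entity: str, since: datetime):
        """Average ambient sentiment touching an entity after a point in time."""
        scores = [e.sentiment for e in self.edges
                  if e.at >= since and entity in (e.source, e.target)]
        return mean(scores) if scores else None

if __name__ == "__main__":
    g = VibesGraph()
    g.record("platform-team", "q3-migration", -0.4, datetime(2024, 7, 2))
    g.record("sales", "q3-migration", 0.6, datetime(2024, 7, 9))
    print(g.vibe_around("q3-migration", since=datetime(2024, 7, 1)))  # ~0.1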

#strategy

1X unveils 1XWM world model for NEO robot platform

1X Technologies has announced the integration of its new video-pretrained world model, 1XWM, into its NEO robot platform. This development targets robotics researchers, developers, and early adopters interested in advanced home robots that navigate and act with human-like understanding. The initial release is for a limited group, primarily for research and internal evaluation, with broader commercial deployment expected following further validation.

The 1XWM model represents a technical shift from conventional vision-language-action (VLA) models by using internet-scale video pretraining combined with egocentric human and robot data. This model predicts robot actions by generating text-conditioned video rollouts, which are then translated into motion commands through an Inverse Dynamics Model. Unlike prior approaches, this method does not require tens of thousands of robot demonstration hours, enabling faster adaptation to new tasks. — Read More
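
The control loop described above, roll out a text-conditioned video prediction and then recover actions with an Inverse Dynamics Model, can be sketched roughly as follows. 1X has not published code or an API for 1XWM, so both classes below are stand-ins and the method names, frame shapes, and 7-dim action vector are placeholder assumptions.

```python
import numpy as np

# Rough sketch of the rollout-then-IDM loop described in the summary above.
# Both models are stand-ins; their interfaces are assumptions, not 1X's API.

class WorldModel:
    """Stand-in for a video-pretrained world model in the spirit of 1XWM."""
    def rollout(self, frames: np.ndarray, instruction: str, horizon: int) -> np.ndarray:
        # A real model would generate future egocentric frames conditioned on
        # the text instruction; here we simply repeat the last observed frame.
        return np.repeat(frames[-1:], horizon, axis=0)

class InverseDynamicsModel:
    """Stand-in IDM mapping consecutive frames to a motion command."""
    def infer_action(self, frame_t: np.ndarray, frame_t1: np.ndarray) -> np.ndarray:
        # A real IDM would regress joint or end-effector commands; return zeros.
        return np.zeros(7)  # assumed 7-dim command, e.g. arm joint velocities

def plan_actions(obs_frames: np.ndarray, instruction: str, horizon: int = 8):
    wm, idm = WorldModel(), InverseDynamicsModel()
    predicted = wm.rollout(obs_frames, instruction, horizon)   # text-conditioned video rollout
    frames = np.concatenate([obs_frames[-1:], predicted], axis=0)
    # Translate the imagined video into motion commands, one frame pair at a time.
    return [idm.infer_action(frames[t], frames[t + 1]) for t in range(horizon)]

if __name__ == "__main__":
    dummy_obs = np.zeros((4, 64, 64, 3), dtype=np.float32)     # 4 RGB observation frames
    actions = plan_actions(dummy_obs, "pick up the cup")
    print(len(actions), actions[0].shape)                      # 8 actions, each shape (7,)
```

The key design point from the announcement is that motion commands are derived from imagined video rather than from large volumes of teleoperated robot demonstrations.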

#robotics

A multimodal sleep foundation model for disease prediction

Sleep is a fundamental biological process with broad implications for physical and mental health, yet its complex relationship with disease remains poorly understood. Polysomnography (PSG)—the gold standard for sleep analysis—captures rich physiological signals but is underutilized due to challenges in standardization, generalizability and multimodal integration. To address these challenges, we developed SleepFM, a multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations. Trained on a curated dataset of over 585,000 hours of PSG recordings from approximately 65,000 participants across several cohorts, SleepFM produces latent sleep representations that capture the physiological and temporal structure of sleep and enable accurate prediction of future disease risk. From one night of sleep, SleepFM accurately predicts 130 conditions with a C-Index of at least 0.75 (Bonferroni-corrected P < 0.01), including all-cause mortality (C-Index, 0.84), dementia (0.85), myocardial infarction (0.81), heart failure (0.80), chronic kidney disease (0.79), stroke (0.78) and atrial fibrillation (0.78). Moreover, the model demonstrates strong transfer learning performance on a dataset from the Sleep Heart Health Study—a dataset that was excluded from pretraining—and performs competitively with specialized sleep-staging models such as U-Sleep and YASA on common sleep analysis tasks, achieving mean F1 scores of 0.70–0.78 for sleep staging and accuracies of 0.69 and 0.87 for classifying sleep apnea severity and presence. This work shows that foundation models can learn the language of sleep from multimodal sleep recordings, enabling scalable, label-efficient analysis and disease prediction. — Read More
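
For readers unfamiliar with the training objective named in the abstract, below is a minimal sketch of a symmetric InfoNCE-style contrastive loss between two PSG modality views (e.g. EEG versus respiratory channels). SleepFM's actual loss accommodates multiple PSG configurations; the encoders are omitted, and the embedding size, batch size, and temperature here are simplifying assumptions.

```python
import numpy as np

# Minimal sketch of a symmetric InfoNCE-style contrastive objective between
# embeddings of two PSG modality views. Dimensions, temperature, and the
# two-view simplification are illustrative assumptions, not SleepFM's exact loss.

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def logsumexp(x, axis):
    m = np.max(x, axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))

def info_nce(emb_a, emb_b, temperature=0.07):
    """Matching sleep epochs across the two modality views are positive pairs."""
    a, b = l2_normalize(emb_a), l2_normalize(emb_b)
    logits = a @ b.T / temperature                        # (batch, batch) cosine similarities
    idx = np.arange(len(a))                               # diagonal entries = positive pairs
    loss_ab = -(logits - logsumexp(logits, axis=1))[idx, idx].mean()
    loss_ba = -(logits.T - logsumexp(logits.T, axis=1))[idx, idx].mean()
    return 0.5 * (loss_ab + loss_ba)                      # symmetric: A->B and B->A

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg_emb = rng.normal(size=(32, 128))                     # 32 sleep epochs, 128-d (assumed)
    resp_emb = eeg_emb + 0.1 * rng.normal(size=(32, 128))    # correlated second view
    print(f"contrastive loss: {info_nce(eeg_emb, resp_emb):.3f}")
```

The learned representations, not this toy loss, are what the paper then feeds into downstream survival-style risk models evaluated with the C-index.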

#human