The Adolescence of Technology

There is a scene in the movie version of Carl Sagan’s book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?’” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens’ answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it. — Read More

#human

Building Brains on a Computer

I first heard people seriously discussing the prospect of “running” a brain in silico back in 2023. Their aim was to emulate, or replicate, all the biological processes of a human brain entirely on a computer.

In that same year, the Wellcome Trust released a report on what it would take to map the mouse connectome: all 70 million neurons. They estimated that imaging would cost $200–300 million and that human proofreading (verifying that the automated traces between neurons are correct) would cost an additional $7–21 billion. Collecting the images would require 20 electron microscopes running continuously, in parallel, for about five years, and the data would occupy about 500 petabytes. The report estimated that mapping the full mouse connectome would take up to 17 years of work.
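
To make those numbers concrete, here is a quick back-of-the-envelope sketch in Python. Only the figures quoted from the Wellcome Trust report go in; the per-microscope data rate and per-neuron data volume are derived here for illustration, not taken from the report.

```python
# Back-of-the-envelope arithmetic on the Wellcome Trust estimates quoted
# above. Only the inputs come from the report; the derived rates are
# illustrative.
NEURONS = 70e6        # mouse-brain neurons to map
DATA_PB = 500         # total imaging data, petabytes
SCOPES = 20           # electron microscopes running in parallel
YEARS = 5             # continuous imaging time

seconds = YEARS * 365.25 * 24 * 3600
gb_per_s_per_scope = (DATA_PB * 1e6) / (SCOPES * seconds)   # 1 PB = 1e6 GB
mb_per_neuron = (DATA_PB * 1e9) / NEURONS                   # 1 PB = 1e9 MB

print(f"{DATA_PB / SCOPES:.0f} PB per microscope over {YEARS} years")
print(f"~{gb_per_s_per_scope:.2f} GB/s sustained per microscope")
print(f"~{mb_per_neuron:,.0f} MB of raw imagery per neuron")
```

That works out to roughly 0.16 GB/s of sustained acquisition per microscope for five years, and about 7 GB of raw imagery per neuron: numbers that help explain the multi-year timeline.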

Given this projection — not to mention the added complexity of scaling this to human brains — I remember finding the idea of brain emulation absurd. Without a map of how neurons in the brain connect with each other, any effort to emulate a brain computationally would prove impossible. But after spending the past year researching the possibility (and writing a 175-page report about it), I’ve updated my views. — Read More

#human

A multimodal sleep foundation model for disease prediction

Sleep is a fundamental biological process with broad implications for physical and mental health, yet its complex relationship with disease remains poorly understood. Polysomnography (PSG)—the gold standard for sleep analysis—captures rich physiological signals but is underutilized due to challenges in standardization, generalizability and multimodal integration. To address these challenges, we developed SleepFM, a multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations. Trained on a curated dataset of over 585,000 hours of PSG recordings from approximately 65,000 participants across several cohorts, SleepFM produces latent sleep representations that capture the physiological and temporal structure of sleep and enable accurate prediction of future disease risk. From one night of sleep, SleepFM accurately predicts 130 conditions with a C-Index of at least 0.75 (Bonferroni-corrected P < 0.01), including all-cause mortality (C-Index, 0.84), dementia (0.85), myocardial infarction (0.81), heart failure (0.80), chronic kidney disease (0.79), stroke (0.78) and atrial fibrillation (0.78). Moreover, the model demonstrates strong transfer learning performance on a dataset from the Sleep Heart Health Study—a dataset that was excluded from pretraining—and performs competitively with specialized sleep-staging models such as U-Sleep and YASA on common sleep analysis tasks, achieving mean F1 scores of 0.70–0.78 for sleep staging and accuracies of 0.69 and 0.87 for classifying sleep apnea severity and presence. This work shows that foundation models can learn the language of sleep from multimodal sleep recordings, enabling scalable, label-efficient analysis and disease prediction. — Read More
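
For readers unfamiliar with the concordance index (C-Index) the abstract reports, here is a minimal, self-contained sketch of Harrell's C-index for right-censored survival data. This is a generic illustration of the metric on toy numbers, not SleepFM's evaluation code.

```python
import itertools

def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.

    times  : observed follow-up time per subject
    events : 1 if the event (e.g. diagnosis) occurred, 0 if censored
    risks  : predicted risk score (higher = event expected sooner)
    """
    concordant, comparable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue  # skip tied times for simplicity
        first, second = (i, j) if times[i] < times[j] else (j, i)
        if not events[first]:
            continue  # earlier subject censored: true ordering unknown
        comparable += 1
        if risks[first] > risks[second]:
            concordant += 1.0
        elif risks[first] == risks[second]:
            concordant += 0.5
    return concordant / comparable

# Toy example: subjects who got sick sooner were scored as higher risk.
print(c_index(times=[2, 5, 7, 10], events=[1, 1, 0, 1],
              risks=[0.9, 0.6, 0.5, 0.2]))  # 1.0 = perfectly concordant
```

A C-index of 0.5 is chance and 1.0 is a perfect risk ranking, so the reported 0.75–0.85 values mean the model orders subjects by future disease risk substantially better than chance.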

#human

The Chinese Room Experiment — AI’s Meaning Problem

“The question is not whether machines can think, but whether men can.” — Joseph Weizenbaum (creator of ELIZA, the first chatbot)

Imagine you’re in a locked room. You don’t speak a word of Chinese, but you have an enormous instruction manual written in English. Through a slot in the door, native Chinese speakers pass you questions written in Chinese characters. You consult your manual, which tells you: “When you see these symbols, write down those symbols in response.” You follow the rules perfectly, sliding beautifully composed Chinese answers back through the slot. To everyone outside, you appear fluent. But here’s the thing: you don’t understand a single word.

This is the Chinese Room, philosopher John Searle’s 1980 thought experiment that has ‘haunted’ artificial intelligence ever since. Today’s models produce increasingly sophisticated text: they write poetry, debug code, and teach complex concepts. The uncomfortable question, then, is whether any of this counts as understanding, or whether we are just being impressed by extremely elaborate rule-following. — Read More
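
The setup is easy to make literal in code. Below is a toy sketch of the room as pure symbol-in, symbol-out lookup; the question-and-answer pairs are invented for illustration, and the point is that nothing in the program represents meaning.

```python
# The "room" as Searle describes it: shape-matching against a rulebook,
# with no representation of meaning anywhere. Entries are invented.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，说得很流利。",   # "Do you speak Chinese?" -> "Yes, fluently."
}

def room(symbols_in: str) -> str:
    # The operator never translates, parses, or understands; they only
    # match incoming shapes against the manual and copy out the response.
    return RULEBOOK.get(symbols_in, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))  # fluent-looking output, zero comprehension inside
```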

#human

Exclusive: Connectome Pioneer Sebastian Seung Is Building A Digital Brain

On a Sunday evening earlier this month, a Stanford professor held a salon at her home near the university’s campus. The main topic for the event was “synthesizing consciousness through neuroscience,” and the home filled with dozens of people, including artificial intelligence researchers, doctors, neuroscientists, philosophers and a former monk, eager to discuss the current collision between new AI and biological tools and how we might identify the arrival of a digital consciousness.

The opening speaker for the salon was Sebastian Seung, and this made a lot of sense. Seung, a neuroscience and computer science professor at Princeton University, has spent much of the last year enjoying the afterglow of his (and others’) breakthrough research describing the inner workings of the fly brain. Seung, you see, helped create the first complete wiring diagram of a fly brain and its 140,000 neurons and 55 million synapses. (Nature put out a special issue last October to document the achievement and its implications.) This diagram, known as a connectome, took more than a decade to finish and stands as the most detailed look at the most complex whole brain ever produced.

… What Seung did not reveal to the audience is that the fly connectome has given rise to his own new neuroscience journey. This week, he’s unveiling a start-up called Memazing, as we can exclusively report. The new company seeks to create the technology needed to reverse engineer the fly brain (and eventually even more complex brains) and create full recreations – or emulations, as Seung calls them – of the brain in software. — Read More

#human

If a Meta AI model can read a brain-wide signal, why wouldn’t the brain?

Did you know migratory birds and sea turtles are able to navigate using the Earth’s magnetic field? It’s called magnetoreception. Basically, being able to navigate was evolutionarily advantageous, so life evolved ways to feel the Earth’s magnetic field. A LOT of ways. Like a shocking number of ways.

It would seem evolution adores detecting magnetic fields. And it makes sense! A literal “sense of direction” is quite useful in staying alive – nearly all life benefits from it, including us.

We don’t totally understand how our magnetoreception works yet, but we know that it does. In 2019, some Caltech researchers put people in a room shielded from the Earth’s magnetic field, with a big magnetic field generator in it. They hooked them up to an EEG and watched what happened in their brains as they manipulated the magnetic field. The result: some of those people showed a response to the magnetic fields on the EEG!

That gets my noggin joggin. Our brain responds to magnetic field changes, but we aren’t aware of it? What if it affects our mood? Would you believe me if I told you lunar gravity influences the Earth’s magnetosphere? Perhaps I was too dismissive of astrology. — Read More
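
The effect reported in that Caltech study was a drop in alpha-band EEG power after certain field rotations. As a rough illustration of the kind of analysis involved, here is a sketch of comparing alpha-band power before and after a stimulus; the sampling rate and signals are simulated placeholders, not the study's data.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # Hz; placeholder EEG sampling rate

def alpha_power(eeg, fs=FS):
    """Mean power spectral density in the alpha band (~8-13 Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].mean()

# Toy comparison: simulate a weaker alpha rhythm after a field rotation.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)
before = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
after = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(alpha_power(before) > alpha_power(after))  # True: alpha suppression
```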

#human

Increasing alignment of large language models with language processing in the human brain

Transformer-based large language models (LLMs) have considerably advanced our understanding of how meaning is represented in the human brain; however, the validity of increasingly large LLMs is being questioned due to their extensive training data and their ability to access context thousands of words long. In this study we investigated whether instruction tuning—another core technique in recent LLMs that goes beyond mere scaling—can enhance models’ ability to capture linguistic information in the human brain. We compared base and instruction-tuned LLMs of varying sizes against human behavioral and brain activity measured with eye-tracking and functional magnetic resonance imaging during naturalistic reading. We show that simply making LLMs larger leads to a closer match with the human brain than fine-tuning them with instructions. These findings have substantial implications for understanding the cognitive plausibility of LLMs and their role in studying naturalistic language comprehension. — Read More
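
Work in this area typically quantifies brain alignment with an encoding model: fit a regularized linear map from an LLM's hidden states to recorded brain responses, then score predictions on held-out data. The sketch below shows that generic recipe on synthetic data with placeholder dimensions; it illustrates the method family, not this paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder shapes: one row per word/time point, LLM hidden states as
# features, voxel responses as targets. All data here is synthetic.
n_samples, n_features, n_voxels = 1000, 768, 50
X = rng.standard_normal((n_samples, n_features))          # LLM embeddings
W = rng.standard_normal((n_features, n_voxels)) * 0.05    # hidden "true" map
Y = X @ W + rng.standard_normal((n_samples, n_voxels))    # noisy brain data

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
enc = Ridge(alpha=100.0).fit(X_tr, Y_tr)   # regularized linear encoding model
pred = enc.predict(X_te)

# "Brain score": mean per-voxel correlation between predicted and held-out
# responses. Comparing this score across models is what "closer match with
# the human brain" cashes out to.
scores = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out voxel correlation: {np.mean(scores):.3f}")
```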

#human

Deep Work in an Always-On World: How Focus Becomes Your Unfair Advantage

In an always-on environment of Slack pings, email floods, and meeting overload, the scarcest resource isn’t information or compute—it’s sustained human attention. This article argues that deep work—distraction-free, cognitively demanding, value-creating effort—is now core infrastructure for modern high performance. Drawing on research in attention, task switching, interruptions, and flow, it explains why “multitasking” is actually rapid context switching that slows delivery, increases defects, and spikes stress. It then connects focus to hard business outcomes: fewer incidents, faster recovery, better code, higher throughput, and improved retention. Practical sections translate the science into playbooks for individuals, teams, and leaders—covering how to measure deep work, protect maker time, fix meeting and communication norms, and overcome cultural resistance to being “less available.” The conclusion is simple: in an AI-heavy, always-on world, organizations that systematically protect deep work will ship better work, with saner teams, at lower real cost. — Read More
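
As a concrete example of what "measuring deep work" can mean in practice, here is one simple operationalization (my own, not necessarily the article's): count the uninterrupted gaps between pings that clear a minimum-length threshold.

```python
from datetime import datetime, timedelta

# One rough operationalization (mine, not necessarily the article's):
# a "deep work block" is any uninterrupted gap of at least 90 minutes.
DEEP_BLOCK = timedelta(minutes=90)

def deep_work_blocks(day_start, day_end, interruptions):
    """Return (start, end) gaps of at least DEEP_BLOCK between interruptions."""
    points = [day_start] + sorted(interruptions) + [day_end]
    return [(a, b) for a, b in zip(points, points[1:]) if b - a >= DEEP_BLOCK]

pings = [datetime(2025, 1, 6, h, m) for h, m in [(9, 15), (9, 40), (11, 30), (15, 5)]]
blocks = deep_work_blocks(datetime(2025, 1, 6, 9), datetime(2025, 1, 6, 17), pings)
print(len(blocks), "deep-work blocks")  # 3: 9:40-11:30, 11:30-15:05, 15:05-17:00
```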

#human

Scientists identify five ages of the human brain over a lifetime

Neuroscientists at the University of Cambridge have identified five “major epochs” of brain structure over the course of a human life, as our brains rewire to support different ways of thinking while we grow, mature, and ultimately decline.

A study led by Cambridge’s MRC Cognition and Brain Sciences Unit compared the brains of 3,802 people between zero and ninety years old using datasets of MRI diffusion scans, which map neural connections by tracking how water molecules move through brain tissue.

In a study published in Nature Communications, scientists say they detected five broad phases of brain structure in the average human life, split up by four pivotal “turning points” between birth and death when our brains reconfigure. — Read More

#human

The Space of Intelligence is Large (Andrej Karpathy)

Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we’ve ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology. — Read More

#human