The Singularity has belonged exclusively to artificial minds, until now. For decades, whole-brain emulation has been the tantalizing counterpart to artificial intelligence: copy a biological brain, neuron by neuron and synapse by synapse, and run it. Today, for the first time, I am releasing a video from a company I helped found, Eon Systems PBC, demonstrating what we believe is the world’s first embodiment of a whole-brain emulation that produces multiple behaviors.
In 2024, Eon senior scientist Philip Shiu and collaborators published in Nature a computational model of the entire adult Drosophila melanogaster brain, containing more than 125,000 neurons and 50 million synaptic connections, built from the FlyWire connectome and machine learning predictions of neurotransmitter identity. That model predicted motor behavior at 95% accuracy. But it was disembodied: a brain without a body, activation without physics, motor outputs with nowhere to go.
Now the brain has somewhere to go. Building on previous work, including Shiu et al.’s whole-brain computational model, the NeuroMechFly v2 embodied simulation framework, and Özdil et al.’s research on centralized brain networks underlying body part coordination, this demonstration integrates Eon’s connectome-based brain emulation with a physics-simulated fly body in MuJoCo. — Read More
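The architecture described above — a connectome-derived neural model driving a physics-simulated body — can be sketched in miniature. Everything below is invented for illustration (a 200-neuron random "connectome," made-up leaky integrate-and-fire parameters, and a stub in place of the MuJoCo body); it shows only the coupling pattern, not Eon's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a connectome: a sparse, signed synaptic weight matrix.
# (The real FlyWire-derived model has ~125,000 neurons; we use 200.)
N = 200
W = rng.normal(0, 0.5, (N, N)) * (rng.random((N, N)) < 0.05)

# Leaky integrate-and-fire dynamics; all parameters here are made up.
v = np.zeros(N)                      # membrane potentials
tau, v_thresh, v_reset = 20.0, 1.0, 0.0
dt = 1.0                             # ms per step

motor_ids = np.arange(N - 10, N)     # hypothetical "motor" neurons

def brain_step(v, external_input):
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)           # reset spiking neurons
    syn_input = W @ spikes.astype(float)       # connectome-weighted input
    v = v + dt * (-(v / tau) + syn_input + external_input)
    return v, spikes

def body_step(motor_spikes):
    """Stub for the physics engine: in the real demo this would advance a
    MuJoCo fly body; here it just accumulates motor drive as a scalar."""
    return float(motor_spikes.sum())

movement = 0.0
for t in range(100):
    sensory = rng.random(N) * 0.3              # placeholder sensory drive
    v, spikes = brain_step(v, sensory)
    movement += body_step(spikes[motor_ids])

print(f"total motor drive over 100 steps: {movement:.1f}")
```

The key design point is the lockstep loop: each brain step produces motor-neuron spikes that the body consumes, and (in a full embodiment) the body's sensors would feed back into `external_input`, closing the sensorimotor loop that a disembodied model lacks.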
Compact deep neural network models of the visual cortex
A powerful approach to understand the computations carried out by the visual cortex is to build models that predict neural responses to any arbitrary image. Deep neural networks (DNNs) have emerged as the leading predictive models [1,2], yet their underlying computations remain buried beneath millions of parameters. Here we challenge the need for models at this scale by seeking predictive and parsimonious DNN models of the primate visual cortex. We first built a highly predictive DNN model of neural responses in macaque visual area V4 by alternating data collection and model training in adaptive closed-loop experiments. We then compressed this large, black-box DNN model, which comprised 60 million parameters, to identify compact models with 5,000 times fewer parameters yet comparable accuracy. This dramatic compression enabled us to investigate the inner workings of the compact models. We discovered a salient computational motif: compact models share similar filters in early processing, but individual models then specialize their feature selectivity by ‘consolidating’ this shared high-dimensional representation in distinct ways. We examined this consolidation step in a dot-detecting model neuron, revealing a computational mechanism that leads to a testable circuit hypothesis for dot-selective V4 neurons. Beyond V4, we found strong model compression for macaque visual areas V1 and IT (inferior temporal cortex), revealing a general computational principle of the visual cortex. Overall, our work challenges the notion that large DNNs are necessary to predict individual neurons and establishes a modelling framework that balances prediction and parsimony. — Read More
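The core move in the abstract — replacing a large predictive model with a far smaller one that mimics its responses — resembles generic knowledge distillation. The sketch below is not the authors' method (their compression pipeline and architectures are not specified here); it only demonstrates the idea with invented random networks, training a small "student" to reproduce a larger "teacher's" responses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "teacher": a wide random ReLU network standing in for the
# 60-million-parameter predictive model of V4 responses.
D, H_big, H_small = 64, 512, 8
W1 = rng.normal(0, 1 / np.sqrt(D), (D, H_big))
w2 = rng.normal(0, 1 / np.sqrt(H_big), H_big)

def teacher(x):                          # x: (batch, D) "images"
    return np.maximum(x @ W1, 0) @ w2

# Compact "student" trained to mimic the teacher's responses
# (distillation by plain gradient descent on squared error).
V1 = rng.normal(0, 1 / np.sqrt(D), (D, H_small))
v2 = rng.normal(0, 1 / np.sqrt(H_small), H_small)

lr = 0.05
for step in range(2000):
    x = rng.normal(0, 1, (128, D))
    h = np.maximum(x @ V1, 0)            # student hidden layer
    err = h @ v2 - teacher(x)            # mismatch with teacher responses
    v2 -= lr * h.T @ err / 128           # backprop through output weights
    dh = np.outer(err, v2) * (h > 0)     # ...and through the ReLU layer
    V1 -= lr * x.T @ dh / 128

ratio = (D * H_big + H_big) // (D * H_small + H_small)
x_test = rng.normal(0, 1, (512, D))
corr = np.corrcoef(teacher(x_test), np.maximum(x_test @ V1, 0) @ v2)[0, 1]
print(f"{ratio}x fewer parameters; student/teacher correlation: {corr:.2f}")
```

Here the compression ratio is only 64x; the paper's 5,000x figure underscores how much redundancy a 60M-parameter model can carry, which is what makes the compact models interpretable at all.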
BCIs in 2026: Still Janky, Still Dangerous, Still Overhyped
Alright, another year, another batch of venture capital pouring into ‘mind-reading’ startups that promise to turn your thoughts into Twitter threads. Frankly, it’s exhausting. We’re in 2026, and the fundamental problems that plagued Brain-Computer Interfaces (BCIs) a decade ago are still here, just wearing slightly shinier packaging. If you think we’re anywhere near seamless neural integration that lets you control a prosthetic arm with the fluidity of a natural limb, or hell, even reliably type at 60 WPM purely by thinking, you’ve been mainlining too much techbro hype. Let’s pull back the curtain on this circus, shall we? Because from an engineering perspective, most of what you hear is, generously, aspirational fiction. — Read More
Corollary Discharge Dysfunction to Inner Speech and its Relationship to Auditory Verbal Hallucinations in Patients with Schizophrenia Spectrum Disorders
Auditory-verbal hallucinations (AVH)—the experience of hearing voices in the absence of auditory stimulation—are a cardinal psychotic feature of schizophrenia-spectrum disorders. It has long been suggested that some AVH may reflect the misperception of inner speech as external voices due to a failure of corollary-discharge-related mechanisms. We aimed to test this hypothesis with an electrophysiological marker of inner speech.
… This study provides empirical support for the theory that AVH are related to abnormalities in the normative suppressive mechanisms associated with inner speech. This phenomenon of “inner speaking-induced suppression” may have utility as a biomarker for schizophrenia-spectrum disorders generally, and may index a tendency for AVH specifically at more extreme levels of abnormality. — Read More
The Adolescence of Technology
There is a scene in the movie version of Carl Sagan’s book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?’” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens’ answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it. — Read More
Building Brains on a Computer
I first heard people seriously discussing the prospect of “running” a brain in silico back in 2023. Their aim was to emulate, or replicate, all the biological processes of a human brain entirely on a computer.
In that same year, the Wellcome Trust released a report on what it would take to map the mouse connectome: all 70 million neurons. They estimated that imaging would cost $200-300 million and that human proofreading, or ensuring that automated traces between neurons were correct, would cost an additional $7-21 billion. Collecting the images would require 20 electron microscopes running continuously, in parallel, for about five years and occupy about 500 petabytes. The report estimated that mapping the full mouse connectome would take up to 17 years of work.
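The report's headline numbers imply some striking derived rates, which are easy to check. The figures below come straight from the paragraph above; the per-microscope data rate and cost ratios are my own back-of-the-envelope arithmetic:

```python
# Back-of-the-envelope check on the Wellcome Trust projections:
# 500 PB of imaging, 20 microscopes, ~5 years of continuous acquisition.
PB = 1e15  # bytes

total_bytes = 500 * PB
microscopes = 20
seconds = 5 * 365 * 24 * 3600          # 5 years of continuous running

# Sustained acquisition rate each microscope must hold for 5 years.
per_scope_rate = total_bytes / microscopes / seconds
print(f"per-microscope data rate: {per_scope_rate / 1e6:.0f} MB/s")

# Proofreading dominates the budget: $7-21B vs $200-300M for imaging.
proofread_low, proofread_high = 7e9, 21e9
imaging_high = 300e6
print(f"proofreading is {proofread_low / imaging_high:.0f}x to "
      f"{proofread_high / imaging_high:.0f}x the highest imaging estimate")
```

Roughly 160 MB/s per microscope, sustained for half a decade, and a proofreading bill tens of times the imaging bill: the arithmetic makes clear why the bottleneck is human correction of automated traces, not data collection.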
Given this projection — not to mention the added complexity of scaling this to human brains — I remember finding the idea of brain emulation absurd. Without a map of how neurons in the brain connect with each other, any effort to emulate a brain computationally would prove impossible. But after spending the past year researching the possibility (and writing a 175-page report about it), I’ve updated my views. — Read More
A multimodal sleep foundation model for disease prediction
Sleep is a fundamental biological process with broad implications for physical and mental health, yet its complex relationship with disease remains poorly understood. Polysomnography (PSG)—the gold standard for sleep analysis—captures rich physiological signals but is underutilized due to challenges in standardization, generalizability and multimodal integration. To address these challenges, we developed SleepFM, a multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations. Trained on a curated dataset of over 585,000 hours of PSG recordings from approximately 65,000 participants across several cohorts, SleepFM produces latent sleep representations that capture the physiological and temporal structure of sleep and enable accurate prediction of future disease risk. From one night of sleep, SleepFM accurately predicts 130 conditions with a C-Index of at least 0.75 (Bonferroni-corrected P < 0.01), including all-cause mortality (C-Index, 0.84), dementia (0.85), myocardial infarction (0.81), heart failure (0.80), chronic kidney disease (0.79), stroke (0.78) and atrial fibrillation (0.78). Moreover, the model demonstrates strong transfer learning performance on a dataset from the Sleep Heart Health Study—a dataset that was excluded from pretraining—and performs competitively with specialized sleep-staging models such as U-Sleep and YASA on common sleep analysis tasks, achieving mean F1 scores of 0.70–0.78 for sleep staging and accuracies of 0.69 and 0.87 for classifying sleep apnea severity and presence. This work shows that foundation models can learn the language of sleep from multimodal sleep recordings, enabling scalable, label-efficient analysis and disease prediction. — Read More
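The abstract's "contrastive learning approach that accommodates multiple PSG configurations" suggests pairing embeddings of different signal modalities from the same sleep epoch. The sketch below is a generic CLIP-style InfoNCE objective, not necessarily SleepFM's actual loss; the "EEG" and "respiratory" embeddings are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(z_a, z_b, temperature=0.1):
    """Pairwise contrastive (InfoNCE) loss between embeddings of two PSG
    modalities. Row i of z_a and z_b come from the same sleep epoch
    (positive pair); every other row in the batch serves as a negative."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # match row i with column i

batch, dim = 32, 16
eeg = rng.normal(size=(batch, dim))               # stand-in EEG embeddings
resp = eeg + 0.1 * rng.normal(size=(batch, dim))  # correlated "respiratory" ones
print(f"aligned loss:  {info_nce(eeg, resp):.3f}")
print(f"shuffled loss: {info_nce(eeg, resp[::-1]):.3f}")
```

Minimizing this loss pulls same-epoch embeddings together across modalities, which is what lets a single latent space absorb whatever subset of PSG channels a given recording happens to contain.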
The Chinese Room Experiment — AI’s Meaning Problem
“The question is not whether machines can think, but whether men can.” — Joseph Weizenbaum (creator of ELIZA, first chatbot)
Imagine you’re in a locked room. You don’t speak a word of Chinese, but you have an enormous instruction manual written in English. Through a slot in the door, native Chinese speakers pass you questions written in Chinese characters. You consult your manual, and it tells you: “When you see these symbols, write down those symbols in response.” You follow the rules perfectly, sliding beautifully composed Chinese answers back through the slot. To everyone outside, you appear fluent. But here’s the thing: you don’t understand a single word.
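Computationally, the room's rulebook is nothing more than a lookup table mapping input symbols to output symbols. A minimal sketch (the entries are invented; real Chinese conversation would need vastly more rules, which is exactly Searle's point about scale not conferring understanding):

```python
# The rulebook as pure symbol manipulation: input shapes map to output
# shapes. Nothing in the table encodes meaning.
rulebook = {
    "你好吗": "我很好，谢谢",   # "How are you?" -> "I'm fine, thanks"
    "你会说中文吗": "当然会",   # "Do you speak Chinese?" -> "Of course"
}

def room(question: str) -> str:
    # The operator matches character sequences, not meanings.
    return rulebook.get(question, "请再说一遍")  # fallback: "Please repeat"

print(room("你好吗"))  # fluent-looking output, zero understanding inside
```

Whether a modern language model is this table writ large, or something categorically different, is precisely the question the excerpt below raises.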
This is the Chinese Room, philosopher John Searle’s 1980 thought experiment that has ‘haunted’ artificial intelligence ever since. Today’s models produce increasingly sophisticated text, writing poetry, debugging code, and teaching complex concepts. The uncomfortable question, then, is whether any of this counts as understanding, or whether we are just being impressed by extremely elaborate rule-following. — Read More
Exclusive: Connectome Pioneer Sebastian Seung Is Building A Digital Brain
On a Sunday evening earlier this month, a Stanford professor held a salon at her home near the university’s campus. The main topic for the event was “synthesizing consciousness through neuroscience,” and the home filled with dozens of people, including artificial intelligence researchers, doctors, neuroscientists, philosophers and a former monk, eager to discuss the current collision between new AI and biological tools and how we might identify the arrival of a digital consciousness.
The opening speaker for the salon was Sebastian Seung, and this made a lot of sense. Seung, a neuroscience and computer science professor at Princeton University, has spent much of the last year enjoying the afterglow of his (and others’) breakthrough research describing the inner workings of the fly brain. Seung, you see, helped create the first complete wiring diagram of a fly brain and its 140,000 neurons and 55 million synapses. (Nature put out a special issue last October to document the achievement and its implications.) This diagram, known as a connectome, took more than a decade to finish and stands as the most detailed look at the most complex whole brain ever produced.
… What Seung did not reveal to the audience is that the fly connectome has given rise to his own new neuroscience journey. This week, he’s unveiling a start-up called Memazing, as we can exclusively report. The new company seeks to create the technology needed to reverse engineer the fly brain (and eventually even more complex brains) and create full recreations – or emulations, as Seung calls them – of the brain in software. — Read More
If a Meta AI model can read a brain-wide signal, why wouldn’t the brain?
Did you know migratory birds and sea turtles are able to navigate using the Earth’s magnetic field? It’s called magnetoreception. Basically, being able to navigate was evolutionarily advantageous, so life evolved ways to feel the Earth’s magnetic field. A LOT of ways. Like a shocking number of ways.
It would seem evolution adores detecting magnetic fields. And it makes sense! A literal “sense of direction” is quite useful in staying alive – nearly all life benefits from it, including us.
We don’t totally understand how our magnetoreception works yet, but we know that it does. In 2019, some Caltech researchers put some people in a room shielded from the Earth’s magnetic field, with a big magnetic field generator in it. They hooked them up to an EEG, and watched what happened in their brains as they manipulated the magnetic field. The result: some of those people showed a response to the magnetic fields on the EEG!
That gets my noggin joggin. Our brain responds to magnetic field changes, but we aren’t aware of it? What if it affects our mood? Would you believe me if I told you lunar gravity influences the Earth’s magnetosphere? Perhaps I was too dismissive of astrology. — Read More