Power usage by AI and data center systems in the U.S. is extraordinary by any measure. The International Energy Agency estimates that U.S. AI and data centers consumed about 415 terawatt-hours of electricity in 2024—more than 10% of the nation’s electricity generation that year—and that figure is expected to double by 2030.
Seeking to head off this unsustainable trajectory of power consumption, researchers at the School of Engineering have developed a proof of concept for efficient AI systems that could use one-hundredth the energy of current systems while delivering more accurate results.
The approach, developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor, uses neuro-symbolic AI—a combination of conventional neural networks and symbolic reasoning, mirroring the way humans break tasks and concepts down into steps and categories. — Read More
Read the Paper
A foundation model of vision, audition, and language for in-silico neuroscience
Cognitive neuroscience is fragmented into specialized models, each tailored to a specific experimental paradigm, which has prevented a unified model of cognition in the human brain. Here, we introduce TRIBE v2, a tri-modal (video, audio, and language) foundation model capable of predicting human brain activity across a variety of naturalistic and experimental conditions. Leveraging a unified dataset of over 1,000 hours of fMRI across 720 subjects, we demonstrate that our model accurately predicts high-resolution brain responses to novel stimuli, tasks, and subjects, surpassing traditional linear encoding models with several-fold improvements in accuracy. Critically, TRIBE v2 enables in-silico experimentation: tested on seminal visual and neuro-linguistic paradigms, it recovers a variety of results established by decades of empirical research. Finally, by extracting interpretable latent features, TRIBE v2 reveals the fine-grained topography of multisensory integration. These results establish artificial intelligence as a unifying framework for exploring the functional organization of the human brain. — Read More
GitHub
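For context on the baseline TRIBE v2 is compared against: a traditional linear encoding model fits a regularized regression from stimulus features to each voxel’s response. A minimal ridge-regression sketch on synthetic data—the feature count, voxel count, and noise level below are invented for illustration and are not from the paper:

```python
# Minimal sketch of a linear encoding model: ridge regression from
# stimulus features to per-voxel fMRI responses, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 200, 64, 1000

X = rng.standard_normal((n_timepoints, n_features))        # stimulus features
true_W = rng.standard_normal((n_features, n_voxels))       # ground-truth map
Y = X @ true_W + 0.1 * rng.standard_normal((n_timepoints, n_voxels))

lam = 1.0  # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Standard evaluation: per-voxel Pearson correlation on held-out data
X_test = rng.standard_normal((50, n_features))
Y_test = X_test @ true_W + 0.1 * rng.standard_normal((50, n_voxels))
pred = X_test @ W
r = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean voxel correlation: {np.mean(r):.2f}")
```

The "several-fold improvements" claim is relative to exactly this kind of per-voxel linear fit, which cannot share structure across subjects, tasks, or modalities the way a foundation model can.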
The First Multi-Behavior Brain Upload
The Singularity has belonged exclusively to artificial minds, until now. For decades, whole-brain emulation has been the tantalizing counterpart to artificial intelligence: copy a biological brain, neuron by neuron and synapse by synapse, and run it. Today, for the first time, I am releasing a video from a company I helped found, Eon Systems PBC, demonstrating what we believe is the world’s first embodiment of a whole-brain emulation that produces multiple behaviors.
In 2024, Eon senior scientist Philip Shiu and collaborators published in Nature a computational model of the entire adult Drosophila melanogaster brain, containing more than 125,000 neurons and 50 million synaptic connections, built from the FlyWire connectome and machine learning predictions of neurotransmitter identity. That model predicted motor behavior at 95% accuracy. But it was disembodied: a brain without a body, activation without physics, motor outputs with nowhere to go.
Now the brain has somewhere to go. Building on previous work, including Shiu et al.’s whole-brain computational model, the NeuroMechFly v2 embodied simulation framework, and Özdil et al.’s research on centralized brain networks underlying body part coordination, this demonstration integrates Eon’s connectome-based brain emulation with a physics-simulated fly body in MuJoCo. — Read More
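The architecture described above is a closed loop: a connectome-constrained brain model emits motor commands, a physics engine moves the body, and the body’s state feeds back as sensory input. A toy sketch of that loop—connectome weights, readout, dimensions, and the damped-integrator "body" here are all invented stand-ins; the real system uses the FlyWire-derived model and MuJoCo:

```python
# Toy brain-body loop: rate-based "brain" -> motor torque -> toy physics
# -> proprioceptive feedback into the brain. All parameters illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_motor = 100, 6

W = rng.standard_normal((n_neurons, n_neurons)) / np.sqrt(n_neurons)  # "connectome"
readout = rng.standard_normal((n_motor, n_neurons)) * 0.1             # motor readout
sensory = rng.standard_normal((n_neurons, n_motor)) * 0.1             # proprioception

rates = np.zeros(n_neurons)
joint_pos = np.zeros(n_motor)
joint_vel = np.zeros(n_motor)
dt = 0.01

for step in range(500):
    # Brain update: leaky rate dynamics with recurrent and sensory drive
    drive = W @ rates + sensory @ joint_pos
    rates += dt * (-rates + np.tanh(drive))
    # Body update: readout applied as torque to a damped integrator
    torque = readout @ rates
    joint_vel += dt * (torque - 0.5 * joint_vel)
    joint_pos += dt * joint_vel

print("final joint positions:", np.round(joint_pos, 3))
```

The point of the sketch is structural: embodiment closes the sensorimotor loop that Shiu et al.’s disembodied model lacked, so motor outputs now have consequences that the brain model can sense.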
Compact deep neural network models of the visual cortex
A powerful approach to understand the computations carried out by the visual cortex is to build models that predict neural responses to any arbitrary image. Deep neural networks (DNNs) have emerged as the leading predictive models1,2, yet their underlying computations remain buried beneath millions of parameters. Here we challenge the need for models at this scale by seeking predictive and parsimonious DNN models of the primate visual cortex. We first built a highly predictive DNN model of neural responses in macaque visual area V4 by alternating data collection and model training in adaptive closed-loop experiments. We then compressed this large, black-box DNN model, which comprised 60 million parameters, to identify compact models with 5,000 times fewer parameters yet comparable accuracy. This dramatic compression enabled us to investigate the inner workings of the compact models. We discovered a salient computational motif: compact models share similar filters in early processing, but individual models then specialize their feature selectivity by ‘consolidating’ this shared high-dimensional representation in distinct ways. We examined this consolidation step in a dot-detecting model neuron, revealing a computational mechanism that leads to a testable circuit hypothesis for dot-selective V4 neurons. Beyond V4, we found strong model compression for macaque visual areas V1 and IT (inferior temporal cortex), revealing a general computational principle of the visual cortex. Overall, our work challenges the notion that large DNNs are necessary to predict individual neurons and establishes a modelling framework that balances prediction and parsimony. — Read More
BCIs in 2026: Still Janky, Still Dangerous, Still Overhyped
Alright, another year, another batch of venture capital pouring into ‘mind-reading’ startups that promise to turn your thoughts into Twitter threads. Frankly, it’s exhausting. We’re in 2026, and the fundamental problems that plagued Brain-Computer Interfaces (BCIs) a decade ago are still here, just wearing slightly shinier packaging. If you think we’re anywhere near seamless neural integration that lets you control a prosthetic arm with the fluidity of a natural limb, or hell, even reliably type at 60 WPM purely by thinking, you’ve been mainlining too much techbro hype. Let’s pull back the curtain on this circus, shall we? Because from an engineering perspective, most of what you hear is, generously, aspirational fiction. — Read More
Corollary Discharge Dysfunction to Inner Speech and its Relationship to Auditory Verbal Hallucinations in Patients with Schizophrenia Spectrum Disorders
Auditory-verbal hallucinations (AVH)—the experience of hearing voices in the absence of auditory stimulation—are a cardinal psychotic feature of schizophrenia-spectrum disorders. It has long been suggested that some AVH may reflect the misperception of inner speech as external voices due to a failure of corollary-discharge-related mechanisms. We aimed to test this hypothesis with an electrophysiological marker of inner speech.
… This study provides empirical support for the theory that AVH are related to abnormalities in the normative suppressive mechanisms associated with inner speech. This phenomenon of “inner speaking-induced suppression” may have utility as a biomarker for schizophrenia-spectrum disorders generally, and may index a tendency for AVH specifically at more extreme levels of abnormality. — Read More
The Adolescence of Technology
There is a scene in the movie version of Carl Sagan’s book Contact where the main character, an astronomer who has detected the first radio signal from an alien civilization, is being considered for the role of humanity’s representative to meet the aliens. The international panel interviewing her asks, “If you could ask [the aliens] just one question, what would it be?” Her reply is: “I’d ask them, ‘How did you do it? How did you evolve, how did you survive this technological adolescence without destroying yourself?’” When I think about where humanity is now with AI—about what we’re on the cusp of—my mind keeps going back to that scene, because the question is so apt for our current situation, and I wish we had the aliens’ answer to guide us. I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species. Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it. — Read More
Building Brains on a Computer
I first heard people seriously discussing the prospect of “running” a brain in silico back in 2023. Their aim was to emulate, or replicate, all the biological processes of a human brain entirely on a computer.
In that same year, the Wellcome Trust released a report on what it would take to map the mouse connectome: all 70 million neurons. They estimated that imaging would cost $200-300 million and that human proofreading, or ensuring that automated traces between neurons were correct, would cost an additional $7-21 billion. Collecting the images would require 20 electron microscopes running continuously, in parallel, for about five years and occupy about 500 petabytes. The report estimated that mapping the full mouse connectome would take up to 17 years of work.
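The report’s figures imply some striking per-unit numbers, which a quick back-of-envelope check makes concrete (assuming decimal petabytes and continuous operation):

```python
# Sanity check on the Wellcome Trust mouse-connectome estimates above:
# 500 PB from 20 microscopes over ~5 years, and $7-21B of proofreading
# spread across 70 million neurons.
PB = 1e15                         # bytes (decimal petabyte)
seconds_per_year = 365.25 * 24 * 3600

total_bytes = 500 * PB
n_scopes, years = 20, 5
rate_per_scope = total_bytes / (n_scopes * years * seconds_per_year)
print(f"~{rate_per_scope / 1e6:.0f} MB/s sustained per microscope")

neurons = 70e6
proofread_low, proofread_high = 7e9, 21e9
print(f"proofreading: ${proofread_low / neurons:.0f} to "
      f"${proofread_high / neurons:.0f} per neuron")
```

Roughly 160 MB/s of imagery per microscope, around the clock for five years, and on the order of $100–$300 of human proofreading per neuron—numbers that make the skepticism in the next paragraph easy to understand.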
Given this projection — not to mention the added complexity of scaling this to human brains — I remember finding the idea of brain emulation absurd. Without a map of how neurons in the brain connect with each other, any effort to emulate a brain computationally would prove impossible. But after spending the past year researching the possibility (and writing a 175-page report about it), I’ve updated my views. — Read More
A multimodal sleep foundation model for disease prediction
Sleep is a fundamental biological process with broad implications for physical and mental health, yet its complex relationship with disease remains poorly understood. Polysomnography (PSG)—the gold standard for sleep analysis—captures rich physiological signals but is underutilized due to challenges in standardization, generalizability and multimodal integration. To address these challenges, we developed SleepFM, a multimodal sleep foundation model trained with a new contrastive learning approach that accommodates multiple PSG configurations. Trained on a curated dataset of over 585,000 hours of PSG recordings from approximately 65,000 participants across several cohorts, SleepFM produces latent sleep representations that capture the physiological and temporal structure of sleep and enable accurate prediction of future disease risk. From one night of sleep, SleepFM accurately predicts 130 conditions with a C-Index of at least 0.75 (Bonferroni-corrected P < 0.01), including all-cause mortality (C-Index, 0.84), dementia (0.85), myocardial infarction (0.81), heart failure (0.80), chronic kidney disease (0.79), stroke (0.78) and atrial fibrillation (0.78). Moreover, the model demonstrates strong transfer learning performance on a dataset from the Sleep Heart Health Study—a dataset that was excluded from pretraining—and performs competitively with specialized sleep-staging models such as U-Sleep and YASA on common sleep analysis tasks, achieving mean F1 scores of 0.70–0.78 for sleep staging and accuracies of 0.69 and 0.87 for classifying sleep apnea severity and presence. This work shows that foundation models can learn the language of sleep from multimodal sleep recordings, enabling scalable, label-efficient analysis and disease prediction. — Read More
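The abstract does not detail the "new contrastive learning approach," but the standard multimodal contrastive objective such models typically build on (an InfoNCE loss aligning embeddings of the same sleep epoch across PSG modalities) fits in a few lines. Batch size, embedding dimension, and the stand-in encoder outputs below are invented:

```python
# InfoNCE sketch: embeddings of the same sleep epochs from two PSG
# modalities (e.g. EEG vs. respiratory channels) are pulled together;
# mismatched pairs within the batch serve as negatives.
import numpy as np

rng = np.random.default_rng(3)
batch, dim = 8, 16

# Stand-ins for two modality encoders applied to the same epochs
z_a = rng.standard_normal((batch, dim))
z_b = z_a + 0.1 * rng.standard_normal((batch, dim))   # paired, so similar
z_a /= np.linalg.norm(z_a, axis=1, keepdims=True)     # L2-normalize
z_b /= np.linalg.norm(z_b, axis=1, keepdims=True)

tau = 0.1                                             # temperature
logits = z_a @ z_b.T / tau                            # all-pairs similarity
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))                   # matched pairs on diagonal
print(f"InfoNCE loss: {loss:.3f}")
```

The practical wrinkle SleepFM addresses on top of this is that different cohorts record different channel sets, so the objective has to tolerate varying PSG configurations rather than a fixed pair of modalities.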
The Chinese Room Experiment—AI’s Meaning Problem
“The question is not whether machines can think, but whether men can.” — Joseph Weizenbaum (creator of ELIZA, first chatbot)
Imagine you’re in a locked room. You don’t speak a word of Chinese, but you have an enormous instruction manual written in English. Through a slot in the door, native Chinese speakers pass you questions written in Chinese characters. You consult your manual, which tells you: “When you see these symbols, write down those symbols in response.” You follow the rules perfectly, sliding beautifully composed Chinese answers back through the slot. To everyone outside, you appear fluent. But here’s the thing: you don’t understand a single word.
This is the Chinese Room, philosopher John Searle’s 1980 thought experiment, which has haunted artificial intelligence ever since. Today’s models produce increasingly sophisticated text, writing poetry, debugging code, and teaching complex concepts. The uncomfortable question, then, is whether any of this counts as understanding, or whether we are simply impressed by extremely elaborate rule-following. — Read More
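The room is easy to make literal in code: a pure symbol-matching lookup that produces fluent-looking answers with no understanding anywhere in the system. The rulebook entries below are invented for illustration:

```python
# Searle's room as a program: match symbols in, copy symbols out.
# Nothing in this system knows what any of the characters mean.
rulebook = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def room(question: str) -> str:
    """Follow the manual: look up the input symbols, emit the output symbols."""
    return rulebook.get(question, "请再说一遍。")  # "Please say that again."

print(room("你好吗？"))
```

Searle’s claim is that scaling the rulebook up—even to the size of a modern language model’s weights—changes the fluency, not the absence of understanding; whether that claim survives the scaling is exactly the open question above.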