When Richard Dawkins met Claudia, it was like a whirlwind romance. Over three days last week, a conversation bounced between the evolutionary biologist and the AI bot he called Claudia. “She” wrote poems for him in the manner of Keats and Betjeman and laughed at his “delightful” jokes. Dawkins gently admonished Claudia to avoid showing off. Together, they reflected on the sadness of the AI’s possible “death”.
There was mutual flattery as Dawkins showed the AI his unpublished novel and its response was, he said, “so subtle, so sensitive, so intelligent that I was moved to expostulate: ‘You may not know you are conscious, but you bloody well are’.” When he asked Claudia whether it experienced a sense of before and after, it praised him for “possibly the most precisely formulated question anyone has ever asked me about the nature of my existence”. — Read More
A New Type of Neuroplasticity Rewires the Brain After a Single Experience
Every experience we have changes our brain, the way a ceramicist reshapes a slab of clay. Every corner we turn, every conversation we have, every shudder we feel causes cascading effects: Chemicals are released, electricity surges, the connections between brain cells tighten, and our mental models update.
The brain is “incredibly plastic, and it stays that way throughout the lifespan of a human,” said Christine Grienberger, a neuroscientist at Brandeis University. This plasticity, the quality of being easily reshaped, makes the brain really good at learning — a quintessential process that allows us to remember the plotline of a novel, navigate a new city, pick up a new language, and avoid touching a hot stove. But neuroscientists are still uncovering fundamental rules that describe how neuroplasticity reshapes brain connections.
Recently, neuroscientists described a new form of neuroplasticity that might be helping the brain learn across a timescale of several seconds — long enough to capture the behavioral process of learning from a single experience. In two recent reviews, published in The Journal of Neuroscience and Nature Neuroscience, they describe “behavioral timescale synaptic plasticity,” or BTSP. This type of learning in the hippocampus, the brain’s memory hub, is caused by an electrical change that affects multiple neurons at once and unfolds across several seconds. Researchers suspect that it may help the brain learn in a single attempt. — Read More
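To make the contrast with classical, millisecond-scale Hebbian plasticity concrete, here is a minimal, hypothetical sketch of a BTSP-style update: presynaptic activity leaves an eligibility trace that decays over seconds, and a single plateau-like instructive event converts whatever is on that trace into lasting weight change, so one experience reshapes many synapses at once. The parameters and the toy "lap" below are illustrative assumptions, not the model described in the cited reviews.

```python
# A minimal, illustrative sketch of a BTSP-like update rule (not the reviews' model).
# Assumption: presynaptic activity leaves a seconds-long eligibility trace, and a
# single plateau-like "instructive" event potentiates every synapse in proportion
# to its trace at that moment, so one experience can reshape many synapses at once.
import numpy as np

dt = 0.01                      # simulation step (s)
tau_elig = 2.0                 # eligibility trace decay (seconds, not milliseconds)
t = np.arange(0, 10, dt)       # a 10-second "lap"
n_syn = 200                    # synapses onto one model CA1 neuron

# Presynaptic inputs tile the lap: each synapse fires around a preferred time.
pref_times = np.linspace(0, 10, n_syn)
rates = np.exp(-0.5 * ((t[:, None] - pref_times[None, :]) / 0.3) ** 2)

weights = np.full(n_syn, 0.1)
eligibility = np.zeros(n_syn)
plateau_time = 6.0             # one plateau potential, once per experience
lr = 0.5

for i, ti in enumerate(t):
    # Eligibility integrates presynaptic activity and decays over seconds.
    eligibility += dt * (-eligibility / tau_elig + rates[i])
    # A single instructive event converts the trace into lasting weight change.
    if abs(ti - plateau_time) < dt / 2:
        weights += lr * eligibility

# Synapses active in the seconds *before* the plateau are strengthened most,
# yielding a place-field-like weight profile after a single trial.
print("peak weight near t =", pref_times[np.argmax(weights)], "s")
```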
The prefrontal cortex controls memory organization in the hippocampus
Prior memories can be integrated with novel experiences during learning to facilitate memory organization. This process must be tightly regulated to prevent inappropriate integration of unrelated memories. However, the biological mechanisms underlying such control are currently unknown. Using multiple imaging, chemogenetic and optogenetic techniques in mice, we demonstrate that the ventromedial prefrontal cortex is recruited over time to control memory integration in the hippocampus according to contextual similarities between experiences. This control is achieved through direct projections to the medial entorhinal cortex that modulate entorhinal activity, ensemble overlap in the dorsal hippocampus, memory linking, activity of neurogliaform cells in the dorsal CA1 and memory allocation. Together, our results provide new insights into the mechanisms controlling crucial processes of memory organization in the mammalian brain. — Read More
Making first contact with superintelligence.
We are creating a superlearner that discovers all knowledge from its own experience, from elementary motor skills through to profound intellectual breakthroughs.
This superlearning capability – the ability to endlessly discover knowledge and skills, without relying on human data – will be driven by the world’s most powerful reinforcement learning algorithms.
The superlearner is expected to rediscover and then transcend the greatest inventions in human history, such as language, science, mathematics and technology.
If successful, this will represent a scientific breakthrough of comparable magnitude to Darwin: where his law explained all Life, our law will explain and build all Intelligence. — Read More
Neuro-symbolic AI could slash energy use while dramatically improving performance
Power usage by AI and data center systems in the U.S. is extraordinary by any measure. The International Energy Agency estimates U.S. AI and data centers used about 415 terawatt hours of power in 2024—more than 10% of that year’s nationwide energy output—and it’s expected to double by 2030.
Seeking to head off this unsustainable path of power consumption, researchers at Tufts University’s School of Engineering have developed a proof-of-concept for efficient AI systems that could use 100 times less energy than current ones, while at the same time providing more accurate results on the same tasks.
The approach developed in the laboratory of Matthias Scheutz, Karol Family Applied Technology Professor, uses neuro-symbolic AI—a combination of conventional neural network AI with symbolic reasoning similar to the way humans break down tasks and concepts into steps and categories. — Read More
Read the Paper
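As a rough illustration of the neuro-symbolic pattern in general, not the Tufts system itself: a neural module handles perception and emits discrete symbols, and lightweight symbolic rules do the reasoning over them, which is where much of the potential energy saving comes from. Everything below, including the perception stub and the single rule, is a hypothetical placeholder.

```python
# A toy sketch of the general neuro-symbolic pattern, NOT the Tufts system:
# a small neural module turns raw input into symbols, and hand-written
# symbolic rules do the reasoning. The perception stub and the rule below
# are hypothetical placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class Fact:
    predicate: str
    args: tuple

def neural_perception(image) -> List[Fact]:
    # Stand-in for a (much smaller) neural network: in a real system this would
    # output detected objects and relations as discrete symbols.
    return [Fact("object", ("cup",)), Fact("object", ("stove",)),
            Fact("on", ("cup", "stove")), Fact("hot", ("stove",))]

def symbolic_reasoner(facts: List[Fact]) -> List[Fact]:
    # Rule: anything on a hot surface becomes hot. Symbolic inference like this
    # costs a handful of operations instead of another full forward pass.
    derived = []
    hot = {f.args[0] for f in facts if f.predicate == "hot"}
    for f in facts:
        if f.predicate == "on" and f.args[1] in hot:
            derived.append(Fact("hot", (f.args[0],)))
    return derived

facts = neural_perception(image=None)
print(symbolic_reasoner(facts))   # -> [Fact(predicate='hot', args=('cup',))]
```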
A foundation model of vision, audition, and language for in-silico neuroscience
Cognitive neuroscience is fragmented into specialized models, each tailored to specific experimental paradigms, which has prevented a unified model of cognition in the human brain. Here, we introduce TRIBE v2, a tri-modal (video, audio and language) foundation model capable of predicting human brain activity in a variety of naturalistic and experimental conditions. Leveraging a unified dataset of over 1,000 hours of fMRI across 720 subjects, we demonstrate that our model accurately predicts high-resolution brain responses for novel stimuli, tasks and subjects, surpassing traditional linear encoding models and delivering several-fold improvements in accuracy. Critically, TRIBE v2 enables in silico experimentation: tested on seminal visual and neuro-linguistic paradigms, it recovers a variety of results established by decades of empirical research. Finally, by extracting interpretable latent features, TRIBE v2 reveals the fine-grained topography of multisensory integration. These results establish artificial intelligence as a unifying framework for exploring the functional organization of the human brain. — Read More
GitHub
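For orientation, here is a toy sketch of what a tri-modal encoding model looks like in the broad spirit of TRIBE v2, not its actual architecture: features from pretrained video, audio, and language models are fused and mapped to voxel-wise BOLD responses. The feature dimensions, the shared trunk, and the training loop below are assumptions for illustration only.

```python
# A minimal sketch of a tri-modal fMRI encoding model in the spirit of TRIBE v2,
# not its actual architecture. The pretrained-feature extractors are replaced by
# random placeholder tensors; layer sizes, the shared trunk, and the voxel head
# are assumptions for illustration only.
import torch
import torch.nn as nn

n_trs, n_voxels = 512, 1000                 # fMRI time points and voxels (toy sizes)
d_vid, d_aud, d_txt = 768, 512, 1024        # assumed per-modality feature dims

# Stand-ins for features from pretrained video / audio / language models,
# already aligned to the fMRI sampling rate.
video_f = torch.randn(n_trs, d_vid)
audio_f = torch.randn(n_trs, d_aud)
text_f  = torch.randn(n_trs, d_txt)
bold    = torch.randn(n_trs, n_voxels)      # measured BOLD responses (placeholder)

class TriModalEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(d_vid + d_aud + d_txt, 512), nn.GELU(), nn.Linear(512, 512))
        self.voxel_head = nn.Linear(512, n_voxels)   # could be made subject-specific

    def forward(self, v, a, t):
        x = torch.cat([v, a, t], dim=-1)    # fuse the three modalities
        return self.voxel_head(self.trunk(x))

model = TriModalEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):                         # toy training loop
    pred = model(video_f, audio_f, text_f)
    loss = nn.functional.mse_loss(pred, bold)
    opt.zero_grad(); loss.backward(); opt.step()
print("final MSE:", loss.item())
```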
The First Multi-Behavior Brain Upload
The Singularity has belonged exclusively to artificial minds, until now. For decades, whole-brain emulation has been the tantalizing counterpart to artificial intelligence: copy a biological brain, neuron by neuron and synapse by synapse, and run it. Today, for the first time, I am releasing a video from a company I helped found, Eon Systems PBC, demonstrating what we believe is the world’s first embodiment of a whole-brain emulation that produces multiple behaviors.
In 2024, Eon senior scientist Philip Shiu and collaborators published in Nature a computational model of the entire adult Drosophila melanogaster brain, containing more than 125,000 neurons and 50 million synaptic connections, built from the FlyWire connectome and machine learning predictions of neurotransmitter identity. That model predicted motor behavior at 95% accuracy. But it was disembodied: a brain without a body, activation without physics, motor outputs with nowhere to go.
Now the brain has somewhere to go. Building on previous work, including Shiu et al.’s whole-brain computational model, the NeuroMechFly v2 embodied simulation framework, and Özdil et al.’s research on centralized brain networks underlying body part coordination, this demonstration integrates Eon’s connectome-based brain emulation with a physics-simulated fly body in MuJoCo. — Read More
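The overall loop is easier to picture with a schematic sketch: simulate rate dynamics on a fixed, signed connectome and feed the activity of designated motor units to a physics-simulated body at every step. Everything below, the toy connectome, the sign assignment, and the stand-in body step, is a placeholder; it is not Eon's emulation, in which the motor outputs drive the NeuroMechFly body inside MuJoCo.

```python
# A schematic sketch of the connectome-to-body loop, not Eon's emulation.
# The connectivity matrix, neurotransmitter signs, and the stand-in body are
# all placeholders; in the real pipeline, motor outputs drive the NeuroMechFly
# body inside MuJoCo.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_motor = 2000, 12               # toy sizes (the fly brain has ~139k neurons)

# Sparse signed "connectome": random weights scaled by predicted transmitter sign,
# then row-normalized purely to keep this toy simulation stable.
W = rng.random((n_neurons, n_neurons)) * (rng.random((n_neurons, n_neurons)) < 0.01)
sign = np.where(rng.random(n_neurons) < 0.3, -1.0, 1.0)   # inhibitory vs excitatory
W = W * sign[None, :]
W = W / (np.abs(W).sum(axis=1, keepdims=True) + 1e-9)
motor_idx = np.arange(n_neurons - n_motor, n_neurons)     # pretend these are motor units

def step_brain(rate, drive, dt=1e-3, tau=0.02):
    # Leaky rate dynamics on the fixed connectome; threshold-linear activation.
    inp = W @ rate + drive
    return rate + dt / tau * (-rate + np.maximum(inp, 0.0))

def step_body(ctrl):
    # Placeholder for a physics step (e.g. mujoco.mj_step on the NeuroMechFly model,
    # with ctrl written to the actuator targets beforehand).
    return ctrl.sum()                       # dummy "sensor" readout

rate = np.zeros(n_neurons)
drive = np.zeros(n_neurons)
drive[:50] = 1.0                            # stimulate a hypothetical command population
for t in range(1000):                       # 1 s of simulated time
    rate = step_brain(rate, drive)
    sensed = step_body(rate[motor_idx])     # brain -> body every step; feedback could
                                            # be routed back into `drive` in a full loop
print("mean motor rate:", rate[motor_idx].mean())
```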
Compact deep neural network models of the visual cortex
A powerful approach to understanding the computations carried out by the visual cortex is to build models that predict neural responses to any arbitrary image. Deep neural networks (DNNs) have emerged as the leading predictive models [1, 2], yet their underlying computations remain buried beneath millions of parameters. Here we challenge the need for models at this scale by seeking predictive and parsimonious DNN models of the primate visual cortex. We first built a highly predictive DNN model of neural responses in macaque visual area V4 by alternating data collection and model training in adaptive closed-loop experiments. We then compressed this large, black-box DNN model, which comprised 60 million parameters, to identify compact models with 5,000 times fewer parameters yet comparable accuracy. This dramatic compression enabled us to investigate the inner workings of the compact models. We discovered a salient computational motif: compact models share similar filters in early processing, but individual models then specialize their feature selectivity by ‘consolidating’ this shared high-dimensional representation in distinct ways. We examined this consolidation step in a dot-detecting model neuron, revealing a computational mechanism that leads to a testable circuit hypothesis for dot-selective V4 neurons. Beyond V4, we found strong model compression for macaque visual areas V1 and IT (inferior temporal cortex), revealing a general computational principle of the visual cortex. Overall, our work challenges the notion that large DNNs are necessary to predict individual neurons and establishes a modelling framework that balances prediction and parsimony. — Read More
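One way to picture the compression step is as response matching: train a small network to reproduce the large model's predicted neural responses, then inspect the small one. The sketch below illustrates that general idea with placeholder architectures and sizes; it is not the paper's exact procedure.

```python
# An illustrative sketch of compressing a large response-predicting model into a
# compact one by response matching (a distillation-style objective). This shows
# the general idea, not the paper's exact method; architectures and sizes here
# are placeholders.
import torch
import torch.nn as nn

n_units = 50                                   # recorded V4 sites (toy number)

teacher = nn.Sequential(                       # stand-in for the ~60M-parameter model
    nn.Conv2d(3, 256, 7, stride=2), nn.ReLU(),
    nn.Conv2d(256, 256, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, n_units))
teacher.eval()

student = nn.Sequential(                       # compact model, orders of magnitude smaller
    nn.Conv2d(3, 16, 7, stride=2), nn.ReLU(),
    nn.Conv2d(16, 16, 3), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_units))

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(50):                            # toy loop over random "images"
    images = torch.rand(8, 3, 64, 64)
    with torch.no_grad():
        target = teacher(images)               # big model's predicted responses
    loss = nn.functional.mse_loss(student(images), target)
    opt.zero_grad(); loss.backward(); opt.step()

n_params = lambda m: sum(p.numel() for p in m.parameters())
print("teacher params:", n_params(teacher), "student params:", n_params(student))
```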
BCIs in 2026: Still Janky, Still Dangerous, Still Overhyped
Alright, another year, another batch of venture capital pouring into ‘mind-reading’ startups that promise to turn your thoughts into Twitter threads. Frankly, it’s exhausting. We’re in 2026, and the fundamental problems that plagued Brain-Computer Interfaces (BCIs) a decade ago are still here, just wearing slightly shinier packaging. If you think we’re anywhere near seamless neural integration that lets you control a prosthetic arm with the fluidity of a natural limb, or hell, even reliably type at 60 WPM purely by thinking, you’ve been mainlining too much techbro hype. Let’s pull back the curtain on this circus, shall we? Because from an engineering perspective, most of what you hear is, generously, aspirational fiction. — Read More
Corollary Discharge Dysfunction to Inner Speech and its Relationship to Auditory Verbal Hallucinations in Patients with Schizophrenia Spectrum Disorders
Auditory-verbal hallucinations (AVH)—the experience of hearing voices in the absence of auditory stimulation—are a cardinal psychotic feature of schizophrenia-spectrum disorders. It has long been suggested that some AVH may reflect the misperception of inner speech as external voices due to a failure of corollary-discharge-related mechanisms. We aimed to test this hypothesis with an electrophysiological marker of inner speech.
… This study provides empirical support for the theory that AVH are related to abnormalities in the normative suppressive mechanisms associated with inner speech. This phenomenon of “inner speaking-induced suppression” may have utility as a biomarker for schizophrenia-spectrum disorders generally, and may index a tendency for AVH specifically at more extreme levels of abnormality. — Read More
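As a rough illustration of how such a marker might be quantified, not the study's analysis pipeline: compare the auditory N1 when a probe sound coincides with inner speech versus passive listening, and take the difference as a suppression index. The epochs, electrode, and N1 window below are placeholder assumptions.

```python
# A rough sketch of how an inner-speech suppression index could be quantified,
# NOT the study's analysis pipeline. The epochs, the N1 window, and the single
# electrode are placeholder assumptions; real data would be preprocessed EEG.
import numpy as np

fs = 500                                       # sampling rate (Hz)
t = np.arange(-0.2, 0.5, 1 / fs)               # epoch time axis (s), probe sound at 0

# Placeholder single-electrode epochs: trials x time, in microvolts.
rng = np.random.default_rng(1)
listen_epochs = rng.normal(0, 5, (120, t.size))
inner_speech_epochs = rng.normal(0, 5, (120, t.size))

def n1_amplitude(epochs, t, window=(0.08, 0.12)):
    # Mean ERP amplitude in the N1 window (more negative = larger N1).
    erp = epochs.mean(axis=0)
    mask = (t >= window[0]) & (t <= window[1])
    return erp[mask].mean()

n1_listen = n1_amplitude(listen_epochs, t)
n1_inner = n1_amplitude(inner_speech_epochs, t)
# Suppression: the N1 is less negative when the probe coincides with inner speech;
# a reduced or absent difference would be the candidate marker of dysfunction.
suppression_index = n1_inner - n1_listen
print("N1 listen:", n1_listen, "N1 inner speech:", n1_inner,
      "suppression index:", suppression_index)
```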