On a Sunday evening earlier this month, a Stanford professor held a salon at her home near the university’s campus. The main topic for the event was “synthesizing consciousness through neuroscience,” and the home filled with dozens of people, including artificial intelligence researchers, doctors, neuroscientists, philosophers and a former monk, eager to discuss the current collision between new AI and biological tools and how we might identify the arrival of a digital consciousness.
The opening speaker for the salon was Sebastian Seung, and this made a lot of sense. Seung, a neuroscience and computer science professor at Princeton University, has spent much of the last year enjoying the afterglow of his (and others’) breakthrough research describing the inner workings of the fly brain. Seung, you see, helped create the first complete wiring diagram of a fly brain and its 140,000 neurons and 55 million synapses. (Nature put out a special issue last October to document the achievement and its implications.) This diagram, known as a connectome, took more than a decade to finish and stands as the most detailed look at the most complex whole brain ever produced.
… What Seung did not reveal to the audience is that the fly connectome has given rise to his own new neuroscience journey. This week, he’s unveiling a start-up called Memazing, as we can exclusively report. The new company seeks to create the technology needed to reverse engineer the fly brain (and eventually even more complex brains) and create full recreations – or emulations, as Seung calls them – of the brain in software. — Read More
If a Meta AI model can read a brain-wide signal, why wouldn’t the brain?
Did you know migratory birds and sea turtles are able to navigate using the Earth’s magnetic field? It’s called magnetoreception. Basically, being able to navigate was evolutionarily advantageous, so life evolved ways to feel the Earth’s magnetic field. A LOT of ways. Like a shocking number of ways.
It would seem evolution adores detecting magnetic fields. And it makes sense! A literal “sense of direction” is quite useful in staying alive – nearly all life benefits from it, including us.
We don’t totally understand how our magnetoreception works yet, but we know that it does. In 2019, some Caltech researchers put people in a room shielded from the Earth’s magnetic field, with a big magnetic field generator inside it. They hooked the participants up to an EEG and watched what happened in their brains as the field was manipulated. The result: some of those people showed a measurable response to the magnetic field changes on the EEG!
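If you wanted a crude way to quantify "a response on the EEG" yourself, one toy approach (my own sketch, not the Caltech team's analysis) is to compare alpha-band power just before and just after a field change. The sampling rate, band limits, and fake data below are all assumptions.

```python
# Toy sketch: compare alpha-band (8-13 Hz) EEG power before vs. after a
# magnetic-field change. Illustrative only; not the published analysis.
import numpy as np
from scipy.signal import welch

def alpha_power(segment, fs=250):
    """Mean power spectral density in the 8-13 Hz band for one EEG segment."""
    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), fs))
    band = (freqs >= 8) & (freqs <= 13)
    return psd[band].mean()

rng = np.random.default_rng(0)
fs = 250
pre = rng.standard_normal(fs)   # 1 s of "EEG" before the field rotation (fake data)
post = rng.standard_normal(fs)  # 1 s of "EEG" after (fake data)
drop = (alpha_power(pre, fs) - alpha_power(post, fs)) / alpha_power(pre, fs)
print(f"alpha-power change: {drop:+.1%}")
```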
That gets my noggin joggin. Our brain responds to magnetic field changes, but we aren’t aware of it? What if it affects our mood? Would you believe me if I told you lunar gravity influences the Earth’s magnetosphere? Perhaps I was too dismissive of astrology. — Read More
Increasing alignment of large language models with language processing in the human brain
Transformer-based large language models (LLMs) have considerably advanced our understanding of how meaning is represented in the human brain; however, the validity of increasingly large LLMs is being questioned due to their extensive training data and their ability to access context thousands of words long. In this study, we investigated whether instruction tuning—another core technique in recent LLMs that goes beyond mere scaling—can enhance models’ ability to capture linguistic information in the human brain. We compared base and instruction-tuned LLMs of varying sizes against human behavioral data and brain activity measured with eye-tracking and functional magnetic resonance imaging during naturalistic reading. We show that simply making LLMs larger leads to a closer match with the human brain than fine-tuning them with instructions. These findings have substantial implications for understanding the cognitive plausibility of LLMs and their role in studying naturalistic language comprehension. — Read More
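For a concrete picture of what "capturing linguistic information in the brain" means operationally, here is a minimal sketch of the standard encoding-model recipe such studies typically use (an assumption on my part, not this paper's actual pipeline): fit a ridge regression from LLM hidden states to fMRI voxel responses and score held-out prediction accuracy. The layer choice, split, and toy data are illustrative.

```python
# Minimal sketch of a brain-alignment ("encoding model") score: regress fMRI
# voxel responses on LLM hidden states and report held-out Pearson correlation.
# Generic recipe, not the paper's pipeline; the data below are random toys.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def brain_alignment_score(llm_features, voxel_responses):
    """llm_features: (n_timepoints, n_dims) hidden states aligned to the stimulus.
    voxel_responses: (n_timepoints, n_voxels) fMRI responses at the same timepoints.
    Returns the mean held-out Pearson r across voxels."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        llm_features, voxel_responses, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(-1, 4, 10)).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)
    Y_hat_c = Y_hat - Y_hat.mean(0)
    Y_te_c = Y_te - Y_te.mean(0)
    r = (Y_hat_c * Y_te_c).sum(0) / (
        np.linalg.norm(Y_hat_c, axis=0) * np.linalg.norm(Y_te_c, axis=0) + 1e-8)
    return float(r.mean())

# Toy comparison of a "base" vs. an "instruction-tuned" feature matrix (fake data).
rng = np.random.default_rng(0)
voxels = rng.standard_normal((500, 200))
base_feats = rng.standard_normal((500, 768))
tuned_feats = rng.standard_normal((500, 768))
print(brain_alignment_score(base_feats, voxels),
      brain_alignment_score(tuned_feats, voxels))
```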
Deep Work in an Always-On World: How Focus Becomes Your Unfair Advantage
In an always-on environment of Slack pings, email floods, and meeting overload, the scarcest resource isn’t information or compute—it’s sustained human attention. This article argues that deep work—distraction-free, cognitively demanding, value-creating effort—is now core infrastructure for modern high performance. Drawing on research in attention, task switching, interruptions, and flow, it explains why “multitasking” is actually rapid context switching that slows delivery, increases defects, and spikes stress. It then connects focus to hard business outcomes: fewer incidents, faster recovery, better code, higher throughput, and improved retention. Practical sections translate the science into playbooks for individuals, teams, and leaders—covering how to measure deep work, protect maker time, fix meeting and communication norms, and overcome cultural resistance to being “less available.” The conclusion is simple: in an AI-heavy, always-on world, organizations that systematically protect deep work will ship better work, with saner teams, at lower real cost. — Read More
Scientists identify five ages of the human brain over a lifetime
Neuroscientists at the University of Cambridge have identified five “major epochs” of brain structure over the course of a human life, as our brains rewire to support different ways of thinking while we grow, mature, and ultimately decline.
A study led by Cambridge’s MRC Cognition and Brain Sciences Unit compared the brains of 3,802 people between zero and ninety years old, using datasets of diffusion MRI scans, which map neural connections by tracking how water molecules move through brain tissue.
In a study published in Nature Communications, scientists say they detected five broad phases of brain structure in the average human life, split up by four pivotal “turning points” between birth and death when our brains reconfigure. — Read More
The Space of Intelligence is Large (Andrej Karpathy)
Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we’ve ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology. — Read More
Continuous Thought Machines
Biological brains demonstrate complex neural activity, where neural dynamics are critical to how brains process information. Most artificial neural networks ignore the complexity of individual neurons. We challenge that paradigm. By incorporating neuron-level processing and synchronization, we reintroduce neural timing as a foundational element. We present the Continuous Thought Machine (CTM), a model designed to leverage neural dynamics as its core representation. The CTM has two innovations: (1) neuron-level temporal processing, where each neuron uses unique weight parameters to process incoming histories; and (2) neural synchronization as a latent representation. The CTM aims to strike a balance between neuron abstractions and biological realism. It operates at a level of abstraction that effectively captures essential temporal dynamics while remaining computationally tractable. We demonstrate the CTM’s performance and versatility across a range of tasks, including solving 2D mazes, ImageNet-1K classification, parity computation, and more. Beyond displaying rich internal representations and offering a natural avenue for interpretation owing to its internal process, the CTM is able to perform tasks that require complex sequential reasoning. The CTM can also leverage adaptive compute, where it can stop earlier for simpler tasks, or keep computing when faced with more challenging instances. The goal of this work is to share the CTM and its associated innovations, rather than pushing for new state-of-the-art results. To that end, we believe the CTM represents a significant step toward developing more biologically plausible and powerful artificial intelligence systems. We provide an accompanying interactive online demonstration at this https URL and an extended technical report at this https URL. — Read More
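As a rough, assumption-laden sketch of the two innovations named above (not the authors' implementation), the snippet below gives each neuron its own weights over a short pre-activation history, here simplified to one linear filter per neuron, unrolls a few internal "ticks", and reads out a latent from pairwise synchronization of the resulting activation traces.

```python
# Simplified sketch of CTM-style ideas: per-neuron temporal filters over a
# history of pre-activations, plus a neuron-synchronization matrix as the
# latent readout. Shapes, sizes, and the per-neuron linear filter (instead of
# a per-neuron model) are illustrative assumptions.
import torch
import torch.nn as nn

class NeuronLevelModels(nn.Module):
    """Each neuron gets its own weights over its recent pre-activation history."""
    def __init__(self, n_neurons: int, history: int):
        super().__init__()
        # One weight vector per neuron: (n_neurons, history) -> one output per neuron.
        self.w = nn.Parameter(torch.randn(n_neurons, history) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_neurons))

    def forward(self, pre_history):  # (batch, n_neurons, history)
        return torch.tanh((pre_history * self.w).sum(-1) + self.b)  # (batch, n_neurons)

def synchronization_latent(post_history):
    """Pairwise synchronization of neuron activations over internal ticks,
    flattened into a latent vector a task head could read out."""
    # post_history: (batch, n_neurons, ticks)
    x = post_history - post_history.mean(dim=-1, keepdim=True)
    sync = torch.einsum("bit,bjt->bij", x, x) / post_history.shape[-1]
    i, j = torch.triu_indices(sync.shape[1], sync.shape[2])
    return sync[:, i, j]  # (batch, n_neurons * (n_neurons + 1) / 2)

if __name__ == "__main__":
    batch, n_neurons, history, ticks = 2, 16, 8, 10
    neurons = NeuronLevelModels(n_neurons, history)
    pre = torch.randn(batch, n_neurons, history)
    posts = []
    for _ in range(ticks):                      # unrolled internal "thought" ticks
        post = neurons(pre)                     # (batch, n_neurons)
        posts.append(post)
        # Slide the history window and append the newest activation.
        pre = torch.cat([pre[..., 1:], post.unsqueeze(-1)], dim=-1)
    latent = synchronization_latent(torch.stack(posts, dim=-1))
    print(latent.shape)
```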
Aging as a disease: The rise of longevity science
In October, the Roots of Progress Institute organized Progress Conference 2025 to connect people and ideas in the progress movement.
In this dispatch, medical historian Laura Mazer explores the conference’s longevity track, where researchers, economists, and entrepreneurs shared new ways to extend not just lifespan, but healthspan.
She finds that the frontier of medicine is shifting — from fighting disease to pursuing more life itself. — Read More
AI Turns Brain Scans Into Full Sentences and It’s Eerie To Say The Least
In a dark MRI scanner outside Tokyo, a volunteer watches a video of someone hurling themselves off a waterfall. Nearby, a computer digests the brain activity pulsing across millions of neurons. A few moments later, the machine produces a sentence: “A person jumps over a deep water fall on a mountain ridge.”
No one typed those words. No one spoke them. They came directly from the volunteer’s brain activity.
That’s the startling premise of “mind captioning,” a new method developed by Tomoyasu Horikawa and colleagues at NTT Communication Science Laboratories in Japan. Described in a study published this week in Science Advances, the system uses a blend of brain imaging and artificial intelligence to generate textual descriptions of what people are seeing — or even visualizing with their mind’s eye — based only on their neural patterns. — Read More
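The article stays at a high level, so purely as an illustrative sketch of the general decode-then-match idea (not Horikawa and colleagues' actual pipeline), one could regress fMRI patterns onto a sentence-embedding space and return the candidate caption whose embedding best matches the decoded vector. Every name and dataset below is a placeholder.

```python
# Assumption-heavy sketch of decode-then-match captioning: map fMRI activity
# into a text-embedding space, then retrieve the nearest candidate sentence.
# Not the published method; all inputs are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

def fit_decoder(fmri, text_embeddings):
    """fmri: (n_trials, n_voxels); text_embeddings: (n_trials, n_dims) embeddings
    of the sentences describing each training stimulus."""
    return Ridge(alpha=100.0).fit(fmri, text_embeddings)

def caption_from_brain(decoder, fmri_trial, candidate_sentences, candidate_embs):
    """Decode a text embedding from one fMRI pattern and return the closest caption."""
    pred = decoder.predict(fmri_trial[None, :])[0]
    sims = candidate_embs @ pred / (
        np.linalg.norm(candidate_embs, axis=1) * np.linalg.norm(pred) + 1e-8)
    return candidate_sentences[int(np.argmax(sims))]
```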
What’s up with Anthropic predicting AGI by early 2027?
As far as I’m aware, Anthropic is the only AI company with official AGI timelines[1]: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan, they say:
As our CEO Dario Amodei writes in ‘Machines of Loving Grace’, we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties:
Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering.
They often describe this capability level as a “country of geniuses in a datacenter”. — Read More