Deciphering language processing in the human brain through LLM representations

Large Language Models (LLMs) optimized for predicting subsequent utterances and adapting to tasks using contextual embeddings can process natural language at a level close to human proficiency. This study shows that neural activity in the human brain aligns linearly with the internal contextual embeddings of speech and language within large language models (LLMs) as they process everyday conversations.

How does the human brain process natural language during everyday conversations? Theoretically, large language models (LLMs) and symbolic psycholinguistic models of human language offer fundamentally different computational frameworks for coding natural language. Large language models do not depend on symbolic parts of speech or syntactic rules. Instead, they rely on simple self-supervised objectives, such as next-word prediction, with generation further refined by reinforcement learning. This allows them to produce context-specific linguistic outputs drawn from real-world text corpora, effectively encoding the statistical structure of natural speech (sounds) and language (words) into a multidimensional embedding space.

Inspired by the success of LLMs, our team at Google Research, in collaboration with Princeton University, NYU, and HUJI, sought to explore the similarities and differences in how the human brain and deep language models process natural language to achieve their remarkable capabilities. Through a series of studies over the past five years, we explored the similarity between the internal representations (embeddings) of specific deep learning models and human brain neural activity during natural free-flowing conversations, demonstrating the power of deep language models’ embeddings to act as a framework for understanding how the human brain processes language. We demonstrate that the word-level internal embeddings generated by deep language models align with the neural activity patterns in established brain regions associated with speech comprehension and production in the human brain. — Read More

#human

Brain-to-Text Decoding: A Non-invasive Approach via Typing

Modern neuroprostheses can now restore communication in patients who have lost the ability to speak or move. However, these invasive devices entail risks inherent to neurosurgery. Here, we introduce a non-invasive method to decode the production of sentences from brain activity and demonstrate its efficacy in a cohort of 35 healthy volunteers. For this, we present Brain2Qwerty, a new deep learning architecture trained to decode sentences from either electro- (EEG) or magneto-encephalography (MEG), while participants typed briefly memorized sentences on a QWERTY keyboard. With MEG, Brain2Qwerty reaches, on average, a character-error-rate (CER) of 32% and substantially outperforms EEG (CER: 67%). For the best participants, the model achieves a CER of 19%, and can perfectly decode a variety of sentences outside of the training set. While error analyses suggest that decoding depends on motor processes, the analysis of typographical errors suggests that it also involves higher-level cognitive factors. Overall, these results narrow the gap between invasive and non-invasive methods and thus open the path for developing safe brain-computer interfaces for non-communicating patients. — Read More
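The character-error-rate figures quoted above are conventionally computed as the Levenshtein edit distance between the decoded string and the reference, divided by the reference length. A minimal sketch of that standard metric (not code from the paper):

```python
def cer(reference, hypothesis):
    """Character error rate: Levenshtein edit distance between the
    decoded and reference strings, divided by the reference length."""
    m, n = len(reference), len(hypothesis)
    # d[i][j] = edit distance between reference[:i] and hypothesis[:j].
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / m

# One substitution plus one deletion over 11 reference characters.
print(cer("hello world", "hallo word"))  # → 2/11 ≈ 0.18
```

A CER of 32% therefore means roughly one character edit for every three characters of the intended sentence.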

#human

Meta Appears to Have Invented a Device Allowing You to Type With Your Brain

Mark Zuckerberg’s Meta says it’s created a device that lets you produce text simply by thinking what you want to say.

As detailed in a pair of studies released by Meta last week, researchers used a state-of-the-art brain scanner and a deep learning AI model to interpret the neural signals of people while they typed, guessing what keys they were hitting with an accuracy high enough to allow them to reconstruct entire sentences.  — Read More

#human

Genetic Algorithm Runs On Atari 800 XL

For the last few years, the accepted story in artificial intelligence was that all of the big names in the field needed more compute, more resources, more energy, and more money to build better models. But simply throwing money and GPUs at these companies without question left them complacent, and ripe to be upset by an underdog with a fraction of the computing resources and funding. Perhaps that should have been more obvious from the start, since people have been building various machine learning algorithms on extremely limited computing platforms like this one built on the Atari 800 XL.

Unlike other models that use memory-intensive training methods like gradient descent to tune their neural networks, [Jean Michel Sellier] is using a genetic algorithm to work within the confines of the platform. Genetic algorithms evaluate potential solutions by evolving them over many generations and keeping the ones which work best each time. The changes made to the surviving generations before they are put through the next evolution can be made in many ways, but for a limited system like this a quick approach is to make small random changes. — Read More
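The survive-and-mutate loop described above can be sketched in a few lines. This is an illustrative toy (evolving a bitstring toward all ones), not [Jean Michel Sellier]'s actual Atari code, but it shows the same ingredients: fitness evaluation, keeping the best candidates, and small random mutations in place of gradient descent.

```python
import random

TARGET_LEN = 16   # length of each candidate bitstring
POP_SIZE = 20     # candidates per generation
GENERATIONS = 200

def fitness(genome):
    # Score a candidate: here, just the count of correct (1) bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Small random changes: flip each bit with low probability.
    return [1 - b if random.random() < rate else b for b in genome]

random.seed(0)
population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Keep the best-scoring half unchanged (the "survivors"), then
    # refill the population with mutated copies of random survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(fitness(best))  # should approach TARGET_LEN
```

No gradients or backpropagation buffers are needed, only the current population and a fitness function, which is why the approach fits in the memory of an 8-bit machine.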

#human

Brain implant that could boost mood by using ultrasound to go under NHS trial

A groundbreaking NHS trial will attempt to boost patients’ mood using a brain-computer-interface that directly alters brain activity using ultrasound.

The device, which is designed to be implanted beneath the skull but outside the brain, maps activity and delivers targeted pulses of ultrasound to “switch on” clusters of neurons. Its safety and tolerability will be tested on about 30 patients in the £6.5m trial, funded by the UK’s Advanced Research and Invention Agency (Aria).

In future, doctors hope the technology could revolutionise the treatment of conditions such as depression, addiction, OCD and epilepsy by rebalancing disrupted patterns of brain activity. — Read More

#human

AI researcher François Chollet founds a new AI lab focused on AGI

François Chollet, an influential AI researcher, is launching a new startup that aims to build frontier AI systems with novel designs.

The startup, Ndea, will consist of an AI research and science lab. It’s looking to “develop and operationalize” AGI. AGI, which stands for “artificial general intelligence,” typically refers to AI that can perform any task a human can. It’s a goalpost for many AI companies, including OpenAI.

… Ndea plans to use a technique called program synthesis, in tandem with other technical approaches, to unlock AGI.  — Read More

#human

What to expect from Neuralink in 2025

In November, a young man named Noland Arbaugh announced he’d be livestreaming from his home for three days straight. His broadcast was in some ways typical fare: a backyard tour, video games, meet mom.

The difference is that Arbaugh, who is paralyzed, has thin electrode-studded wires installed in his brain, which he used to move a computer mouse on a screen, click menus, and play chess. The implant, called N1, was installed last year by neurosurgeons working with Neuralink, Elon Musk’s brain-interface company.

The possibility of listening to neurons and using their signals to move a computer cursor was first demonstrated more than 20 years ago in a lab setting. Now, Arbaugh’s livestream is an indicator that Neuralink is a whole lot closer to creating a plug-and-play experience that can restore people’s daily ability to roam the web and play games, giving them what the company has called “digital freedom.”

But this is not yet a commercial product.  — Read More

#human

How should we test AI for human-level intelligence? OpenAI’s o3 electrifies quest

The technology firm OpenAI made headlines last month when its latest experimental chatbot model, o3, achieved a high score on a test that marks progress towards artificial general intelligence (AGI). OpenAI’s o3 scored 87.5%, trouncing the previous best score for an artificial intelligence (AI) system of 55.5%.

This is “a genuine breakthrough”, says AI researcher François Chollet, who created the test, called the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI), in 2019 while working at Google, based in Mountain View, California. A high score on the test doesn’t mean that AGI — broadly defined as a computing system that can reason, plan and learn skills as well as humans can — has been achieved, Chollet says, but o3 is “absolutely” capable of reasoning and “has quite substantial generalization power”.

Researchers are bowled over by o3’s performance across a variety of tests, or benchmarks, including the extremely difficult FrontierMath test, announced in November by the virtual research institute Epoch AI. …But many, including Rein, caution that it’s hard to tell whether the ARC-AGI test really measures AI’s capacity to reason and generalize. — Read More

#human

It’s AI Versus the World’s Largest Tuberculosis Epidemic 

The scourge of tuberculosis (TB) may be largely a distant memory for most Americans and Europeans, but it killed roughly 1.25 million people last year around the world. A non-profit based in India, a country that accounts for more than a quarter of all cases, is developing AI tools that could boost efforts to eradicate the disease.

Roughly 10 million people a year fall ill with TB, making it one of the world’s most prevalent infectious diseases. In 2018, Indian Prime Minister Narendra Modi made an ambitious pledge to eliminate TB in India by 2025. With 2.5 million cases recorded in India last year, that goal clearly won’t be met; still, the country has invested hundreds of millions of dollars in a vast national TB program, and has reduced the disease’s incidence by about 18 percent between 2015 and 2023.

… Indian non-profit Wadhwani AI has developed a suite of AI-powered tools to help health workers detect undiagnosed cases, decide on treatment plans, and prevent people from dropping out of treatment. Working with the Indian government and the U.S. Agency for International Development, the organization is currently piloting these tools across the country. And Wadhwani’s director of solutions, Nakul Jain, says 2025 could see several incorporated into India’s national TB patient management system, Nikshay. — Read More

#human

This “Lollipop” Brings Taste to Virtual Reality

Virtual- and augmented-reality setups already modify the way users see and hear the world around them. Add in haptic feedback for a sense of touch and a VR version of Smell-O-Vision, and only one major sense remains: taste.

To fill the gap, researchers at the City University of Hong Kong have developed a new interface to simulate taste in virtual and other extended reality (XR). The group previously worked on other systems for wearable interfaces, such as haptic and olfactory feedback. To create a more “immersive VR experience,” they turned to adding taste sensations, says Yiming Liu, a coauthor of the group’s research paper published today in the Proceedings of the National Academy of Sciences. — Read More

#human