Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators. — Read More
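The method the abstract describes, deriving indicator properties from theories and then assessing systems against them, can be pictured as a simple rubric. In the Python sketch below, the indicator names paraphrase the theories listed above, while the evidence flags and scoring are entirely hypothetical and not the paper's actual criteria:

```python
# Toy rubric illustrating the paper's method: derive indicator properties
# from theories of consciousness, then check a system for each property.
# Indicator names paraphrase the theories named above; the evidence flags
# and the simple count are hypothetical, not the paper's actual criteria.

INDICATORS = {
    "recurrent_processing": "recurrence among processing stages",
    "global_workspace": "limited-capacity workspace broadcasting to modules",
    "higher_order": "metacognitive monitoring of first-order states",
    "predictive_processing": "generative model issuing top-down predictions",
    "attention_schema": "a model of the system's own attention",
}

def assess(system: str, evidence: dict) -> None:
    """Report which indicator properties a system shows, given boolean evidence."""
    satisfied = [name for name in INDICATORS if evidence.get(name)]
    print(f"{system}: {len(satisfied)}/{len(INDICATORS)} indicators satisfied")
    for name, description in INDICATORS.items():
        mark = "x" if name in satisfied else " "
        print(f"  [{mark}] {name}: {description}")

# Hypothetical reading of a plain feed-forward transformer chatbot.
assess("example-LLM", {"predictive_processing": True})
```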

#human

The Novel Written about—and with—Artificial Intelligence

THREE DISTINCT personalities, all female, walk into a bar together in Do You Remember Being Born? and emerge with fat paycheques, a collaborative long poem slyly titled “Self-portrait,” and a lot of nagging doubt. Actually, the proverbial bar in Sean Michaels’s dizzying new novel is not a bar but the Mind Studio, an entry-by-key-card-and-retina-scan-only room on an unnamed tech giant’s San Francisco campus. And one of the three personalities, a “2.5-trillion-parameter neural network” named Charlotte, is better described as feminine than female. But the doubt, tucked under a lot of surface-level optimism, is real, instilled in characters and readers alike by the author. — Read More

#human

Large language models aren’t people. Let’s stop testing them as if they were.

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”
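The “jumped-up autocomplete” Webb was probing is the next-word objective: given a block of text, the model outputs a probability for every possible next token. A minimal sketch using the Hugging Face transformers library, with the openly downloadable GPT-2 standing in for GPT-3 (which is not):

```python
# Minimal sketch of the next-word objective: the model outputs a probability
# distribution over the vocabulary for the next token. GPT-2 (openly
# downloadable) stands in here for GPT-3, which is not.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The next number in the sequence 2, 4, 6, 8 is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits        # shape: (1, seq_len, vocab_size)
probs = logits[0, -1].softmax(dim=-1)       # distribution over the next token

# Show the five most likely continuations and their probabilities.
for token_id in probs.topk(5).indices:
    print(repr(tokenizer.decode(token_id)), float(probs[token_id]))
```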

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.

… What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models.

… These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing teachers, doctors, journalists, and lawyers. …But there’s a problem: there is little agreement on what those results really mean. — Read More

#human

An analog-AI chip for energy-efficient speech recognition and transcription

Models of artificial intelligence (AI) that have billions of parameters can achieve high accuracy across a range of tasks [1,2], but they exacerbate the poor energy efficiency of conventional general-purpose processors, such as graphics processing units or central processing units. Analog in-memory computing (analog-AI) [3–7] can provide better energy efficiency by performing matrix–vector multiplications in parallel on ‘memory tiles’. However, analog-AI has yet to demonstrate software-equivalent (SWeq) accuracy on models that require many such tiles and efficient communication of neural-network activations between the tiles. Here we present an analog-AI chip that combines 35 million phase-change memory devices across 34 tiles, massively parallel inter-tile communication and analog, low-power peripheral circuitry that can achieve up to 12.4 tera-operations per second per watt (TOPS/W) chip-sustained performance. We demonstrate fully end-to-end SWeq accuracy for a small keyword-spotting network and near-SWeq accuracy on the much larger MLPerf [8] recurrent neural-network transducer (RNNT), with more than 45 million weights mapped onto more than 140 million phase-change memory devices across five chips. — Read More
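The core operation described here, matrix–vector multiplication carried out in parallel on memory tiles, can be sketched in a few lines of NumPy. The tile size and Gaussian noise model below are illustrative assumptions, not parameters of the chip in the paper:

```python
# Illustrative sketch of analog in-memory matrix-vector multiplication:
# the weight matrix is partitioned across TILE x TILE "memory tiles", each
# tile computes its partial product in place (with some analog error), and
# partial results are summed. Tile size and noise level are made up for
# illustration; they are not the parameters of the chip in the paper.
import numpy as np

TILE = 512          # hypothetical tile dimension
NOISE_STD = 0.01    # hypothetical analog noise on each tile's output

def analog_mvm(W: np.ndarray, x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Compute W @ x by summing noisy per-tile partial products."""
    n_out, n_in = W.shape
    y = np.zeros(n_out)
    for r in range(0, n_out, TILE):
        for c in range(0, n_in, TILE):
            tile = W[r:r+TILE, c:c+TILE]                # weights on one tile
            partial = tile @ x[c:c+TILE]                # computed inside the tile
            partial += rng.normal(0, NOISE_STD, partial.shape)  # analog error
            y[r:r+TILE] += partial                      # accumulate across tiles
    return y

rng = np.random.default_rng(0)
W, x = rng.standard_normal((2048, 1024)), rng.standard_normal(1024)
print(np.abs(analog_mvm(W, x, rng) - W @ x).max())  # small, noise-bounded gap
```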

#nvidia, #human

A high-performance speech neuroprosthesis

Speech brain–computer interfaces (BCIs) have the potential to restore rapid communication to people with paralysis by decoding neural activity evoked by attempted speech into text [1,2] or sound [3,4]. Early demonstrations, although promising, have not yet achieved accuracies sufficiently high for communication of unconstrained sentences from a large vocabulary [1–7]. Here we demonstrate a speech-to-text BCI that records spiking activity from intracortical microelectrode arrays. Enabled by these high-resolution recordings, our study participant—who can no longer speak intelligibly owing to amyotrophic lateral sclerosis—achieved a 9.1% word error rate on a 50-word vocabulary (2.7 times fewer errors than the previous state-of-the-art speech BCI [2]) and a 23.8% word error rate on a 125,000-word vocabulary (the first successful demonstration, to our knowledge, of large-vocabulary decoding). Our participant’s attempted speech was decoded at 62 words per minute, which is 3.4 times as fast as the previous record [8] and begins to approach the speed of natural conversation (160 words per minute [9]). Finally, we highlight two aspects of the neural code for speech that are encouraging for speech BCIs: spatially intermixed tuning to speech articulators that makes accurate decoding possible from only a small region of cortex, and a detailed articulatory representation of phonemes that persists years after paralysis. These results show a feasible path forward for restoring rapid communication to people with paralysis who can no longer speak. — Read More
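Word error rate, the metric behind the 9.1% and 23.8% figures above, is the word-level edit distance between the decoded sentence and the reference, divided by the number of reference words. A minimal implementation on a made-up example:

```python
# Word error rate (WER), the metric reported in the abstract: the minimum
# number of word substitutions, insertions, and deletions needed to turn
# the decoded sentence into the reference, divided by the reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

# Hypothetical decoded output vs. what the participant attempted to say.
print(wer("i would like some water please", "i would like some water"))
```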

#human

Largest genetic study of brain structure identifies how the brain is organised

The largest ever study of the genetics of the brain – encompassing some 36,000 brain scans – has identified more than 4,000 genetic variants linked to brain structure. The results of the study, led by researchers at the University of Cambridge, are published in Nature Genetics today.

Our brains are very complex organs, with huge variety between individuals in terms of the overall volume of the brain, how it is folded and how thick these folds are. Little is known about how our genetic make-up shapes the development of the brain.

… [F]indings have allowed researchers to confirm and, in some cases, identify how different properties of the brain are genetically linked to each other. — Read More
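A variant gets “linked” to brain structure through a per-variant association test. A minimal sketch on simulated data (genotype dosages and a toy brain-volume phenotype; real studies like this one also adjust for covariates, population structure and multiple testing):

```python
# Minimal sketch of how a variant gets "linked" to a brain-structure trait:
# regress the phenotype on genotype dosage (0/1/2 copies of an allele) for
# each variant and keep those with very small p-values. All data here are
# simulated; the causal variant and effect size are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_people, n_variants = 5000, 200
genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))  # allele dosages
brain_volume = 0.2 * genotypes[:, 0] + rng.standard_normal(n_people)  # variant 0 causal

for v in range(n_variants):
    slope, _, _, p_value, _ = stats.linregress(genotypes[:, v], brain_volume)
    if p_value < 5e-8:  # conventional genome-wide significance threshold
        print(f"variant {v}: effect={slope:.3f}, p={p_value:.2e}")
```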

#human

Does AI Understand the World?

Do large language models understand the world? As a scientist and engineer, I’ve avoided asking whether an AI system “understands” anything. There’s no widely agreed-upon, scientific test for whether a system really understands — as opposed to appearing to understand — just as no such tests exist for consciousness or sentience, as I discussed in an earlier letter. This makes the question of understanding a matter of philosophy rather than science. But with this caveat, I believe that LLMs build sufficiently complex models of the world that I feel comfortable saying that, to some extent, they do understand the world. — Read More

Conversation with Geoff Hinton

#human

Reconstructing the Mind’s Eye: fMRI-to-image with Contrastive Learning and Diffusion Priors

#human

Is Consciousness Real? 

— Read More

#human, #videos

I Wore the Future With a Brain-Connected AR-VR Headset

The next frontier might be neurotech: OpenBCI’s Galea headset, along with advances in assistive controls, points to a wild, wearable road ahead.

A few weeks ago, I saw the best-quality mixed-reality headset with an interface controlled using my fingers and eyes: Apple’s Vision Pro. But a few months before its announcement, I saw something perhaps even wilder. Clips on my ears, a crown of rubbery-tipped sensors nestled into my hair, and a face mask lowered in front of my eyes. Suddenly I was looking at my own brain waves in VR and moving things around with only tiny movements of my facial muscles. I was test-driving OpenBCI’s Galea.

The future of VR and AR is advancing steadily, but inputs remain a challenge. For now, it’s a territory moving from physical controllers to hand- and eye-tracking. But there are deeper possibilities beyond that, and they’re neural. — Read More

#human