Minds of machines: The great AI consciousness conundrum

David Chalmers was not expecting the invitation he received in September of last year. As a leading authority on consciousness, Chalmers regularly circles the world delivering talks at universities and academic meetings to rapt audiences of philosophers—the sort of people who might spend hours debating whether the world outside their own heads is real and then go blithely about the rest of their day. This latest request, though, came from a surprising source: the organizers of the Conference on Neural Information Processing Systems (NeurIPS), a yearly gathering of the brightest minds in artificial intelligence. 

… Chalmers was an eminently sensible choice to speak about AI consciousness. He’d earned his PhD in philosophy at an Indiana University AI lab, where he and his computer scientist colleagues spent their breaks debating whether machines might one day have minds. In his 1996 book, The Conscious Mind, he spent an entire chapter arguing that artificial consciousness was possible. 

If he had been able to interact with systems like LaMDA and ChatGPT back in the ’90s, before anyone knew how such a thing might work, he would have thought there was a good chance they were conscious, Chalmers says. But when he stood before a crowd of NeurIPS attendees in a cavernous New Orleans convention hall, clad in his trademark leather jacket, he offered a different assessment. Yes, large language models—systems that have been trained on enormous corpora of text in order to mimic human writing as accurately as possible—are impressive. But, he said, they lack too many of the potential requisites for consciousness for us to believe that they actually experience the world. — Read More

#human

This is the largest map of the human brain ever made

Researchers have created the largest atlas of human brain cells so far, revealing more than 3,000 cell types — many of which are new to science. The work, published in a package of 21 papers today in Science, Science Advances and Science Translational Medicine, will aid the study of diseases, cognition and what makes us human, among other things, say the authors.

The enormous cell atlas offers a detailed snapshot of the most complex known organ. “It’s highly significant,” says Anthony Hannan, a neuroscientist at the Florey Institute of Neuroscience and Mental Health in Melbourne, Australia. Researchers have previously mapped the human brain using techniques such as magnetic resonance imaging, but this is the first atlas of the whole human brain at the single-cell level, showing its intricate molecular interactions, adds Hannan. “These types of atlases really are laying the groundwork for a much better understanding of the human brain.” — Read More

#human

Auto-Regressive Next-Token Predictors are Universal Learners

Large language models display remarkable capabilities in logical and mathematical reasoning, allowing them to solve complex tasks. Interestingly, these abilities emerge in networks trained on the simple task of next-token prediction. In this work, we present a theoretical framework for studying auto-regressive next-token predictors. We demonstrate that even simple models such as linear next-token predictors, trained on Chain-of-Thought (CoT) data, can approximate any function efficiently computed by a Turing machine. We introduce a new complexity measure — length complexity — which counts the number of intermediate tokens in a CoT sequence required to approximate some target function, and analyze the interplay between length complexity and other notions of complexity. Finally, we show experimentally that simple next-token predictors, such as linear networks and shallow Multi-Layer Perceptrons (MLPs), display non-trivial performance on text generation and arithmetic tasks. Our results demonstrate that the power of language models can be attributed, to a great extent, to the auto-regressive next-token training scheme, and not necessarily to a particular choice of architecture. — Read More
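The idea is easiest to see in a toy case (this is an illustration, not the paper's formal construction): with chain-of-thought supervision, a predictor that conditions only on the previous state token can compute parity of a bit string — hard to learn as a single-step target, but trivial as a sequence of easy next-token steps. All names below are hypothetical.

```python
# Toy sketch: parity via chain-of-thought next-token prediction.
# Emitting intermediate tokens (the running parity) turns one hard
# prediction into many easy ones — the intuition behind "length
# complexity" trading intermediate tokens for expressive power.

def cot_trace(bits):
    """Emit the chain-of-thought: running parity after each bit."""
    trace, p = [], 0
    for b in bits:
        p ^= b
        trace.append(p)
    return trace  # final element is the parity of the whole string

# A "trained" next-token map: (current parity, next bit) -> next parity.
# For a deterministic task like this, a linear/softmax predictor over
# one-hot inputs reduces to exactly such a table.
table = {(p, b): p ^ b for p in (0, 1) for b in (0, 1)}

def predict_parity(bits):
    """Autoregressively roll out the learned next-token map."""
    p = 0
    for b in bits:
        p = table[(p, b)]
    return p

bits = [1, 0, 1, 1, 0, 1]
assert predict_parity(bits) == sum(bits) % 2
```

Without the intermediate tokens, the model would have to map the full bit string to its parity in one shot; with them, every prediction depends on only two symbols.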

#human

A Lab Just 3D-Printed a Neural Network of Living Brain Cells

YOU CAN 3D-PRINT nearly anything: rockets, mouse ovaries, and for some reason, lamps made of orange peels. Now, scientists at Monash University in Melbourne, Australia, have printed living neural networks composed of rat brain cells that seem to mature and communicate like real brains do.

Researchers want to create mini-brains partly because they could someday offer a viable alternative to animal testing in drug trials and studies of basic brain function.  …3D-printing is just one entry in the race to build a better mini-brain.  …With 3D-printing, researchers can culture cells in specific patterns on top of recording electrodes, granting them a degree of experimental control normally reserved for flat cell cultures. But because the structure is soft enough to allow cells to migrate and reorganize themselves in 3D space, it gains some of the advantages of the organoid approach, more closely mimicking the structure of normal tissue. — Read More

#human

What Can AI Decode From Human Brain Activity?

Research exploring the capabilities of artificial intelligence (AI) to interpret and translate brain activity has been popping up more and more lately.

By using neuroimaging data and AI models, recent studies have explored AI’s ability to decode brain activity and reconstruct the images seen by individuals, the sounds heard, or even the stories imagined, by generating comparable images, streams of text, and even tunes.  — Read More

#human

A foundation model for generalizable disease detection from retinal images

Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders [1]. However, the development of AI models requires substantial annotation and models are usually task-specific with limited generalizability to different clinical applications [2]. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. Specifically, RETFound is trained on 1.6 million unlabelled retinal images by means of self-supervised learning and then adapted to disease detection tasks with explicit labels. We show that adapted RETFound consistently outperforms several comparison models in the diagnosis and prognosis of sight-threatening eye diseases, as well as incident prediction of complex systemic disorders such as heart failure and myocardial infarction with fewer labelled data. RETFound provides a generalizable solution to improve model performance and alleviate the annotation workload of experts to enable broad clinical AI applications from retinal imaging. — Read More

#human

Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators. — Read More

#human

The Novel Written about—and with—Artificial Intelligence

THREE DISTINCT personalities, all female, walk into a bar together in Do You Remember Being Born?  and emerge with fat paycheques, a collaborative long poem slyly titled “Self-portrait,” and a lot of nagging doubt. Actually, the proverbial bar in Sean Michaels’s dizzying new novel is not a bar but the Mind Studio, an entry-by-key-card-and-retina-scan-only room on an unnamed tech giant’s San Francisco campus. And one of the three personalities, a “2.5-trillion-parameter neural network” named Charlotte, is better described as feminine than female. But the doubt, tucked under a lot of surface-level optimism, is real, instilled in characters and readers alike by the author. — Read More

#human

Large language models aren’t people. Let’s stop testing them as if they were.

When Taylor Webb played around with GPT-3 in early 2022, he was blown away by what OpenAI’s large language model appeared to be able to do. Here was a neural network trained only to predict the next word in a block of text—a jumped-up autocomplete. And yet it gave correct answers to many of the abstract problems that Webb set for it—the kind of thing you’d find in an IQ test. “I was really shocked by its ability to solve these problems,” he says. “It completely upended everything I would have predicted.”

Webb is a psychologist at the University of California, Los Angeles, who studies the different ways people and computers solve abstract problems. He was used to building neural networks that had specific reasoning capabilities bolted on. But GPT-3 seemed to have learned them for free.

… What Webb’s research highlights is only the latest in a long string of remarkable tricks pulled off by large language models.

… These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing teachers, doctors, journalists, and lawyers. …But there’s a problem: there is little agreement on what those results really mean.  — Read More

#human

An analog-AI chip for energy-efficient speech recognition and transcription

Models of artificial intelligence (AI) that have billions of parameters can achieve high accuracy across a range of tasks [1,2], but they exacerbate the poor energy efficiency of conventional general-purpose processors, such as graphics processing units or central processing units. Analog in-memory computing (analog-AI) [3–7] can provide better energy efficiency by performing matrix–vector multiplications in parallel on ‘memory tiles’. However, analog-AI has yet to demonstrate software-equivalent (SWeq) accuracy on models that require many such tiles and efficient communication of neural-network activations between the tiles. Here we present an analog-AI chip that combines 35 million phase-change memory devices across 34 tiles, massively parallel inter-tile communication and analog, low-power peripheral circuitry that can achieve up to 12.4 tera-operations per second per watt (TOPS/W) chip-sustained performance. We demonstrate fully end-to-end SWeq accuracy for a small keyword-spotting network and near-SWeq accuracy on the much larger MLPerf [8] recurrent neural-network transducer (RNNT), with more than 45 million weights mapped onto more than 140 million phase-change memory devices across five chips. — Read More
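Conceptually (this is a toy model, not the chip's actual circuitry), analog in-memory computing evaluates y = W·x in a single step: weights are stored as device conductances, inputs are applied as voltages, and each output line sums the resulting currents, at the cost of analog read noise. The sketch below models one "tile" with additive Gaussian noise; all names are illustrative.

```python
import random

# Toy model of one analog-AI memory tile: the matrix-vector product
# happens "in place" — every multiply-accumulate of a row completes
# in parallel as currents summing on an output line — with a small
# additive read noise modeling device non-idealities.

def analog_mvm(W, x, noise_std=0.01, seed=0):
    """Matrix-vector product with per-output Gaussian read noise."""
    rng = random.Random(seed)
    out = []
    for row in W:
        acc = sum(w * xi for w, xi in zip(row, x))  # currents sum on the line
        out.append(acc + rng.gauss(0.0, noise_std))
    return out

W = [[0.5, -0.2],
     [0.1, 0.3]]
x = [1.0, 2.0]
y = analog_mvm(W, x)
exact = [0.1, 0.7]  # noiseless result
assert all(abs(a - b) < 0.1 for a, b in zip(y, exact))
```

The "software-equivalent accuracy" question in the paper is essentially whether a network built from many such noisy tiles, plus the inter-tile communication between them, still matches the noiseless (digital) result.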

#nvidia, #human