A memory prosthesis could restore memory in people with damaged brains

A unique form of brain stimulation appears to boost people’s ability to remember new information—by mimicking the way our brains create memories.

The “memory prosthesis,” which involves inserting an electrode deep into the brain, also seems to work in people with memory disorders—and is even more effective in people who had poor memory to begin with, according to new research. In the future, more advanced versions of the memory prosthesis could help people with memory loss due to brain injuries or as a result of aging or degenerative diseases like Alzheimer’s, say the researchers behind the work.

…It works by copying what happens in the hippocampus—a seahorse-shaped region deep in the brain that plays a crucial role in memory. The brain structure not only helps us form short-term memories but also appears to direct memories to other regions for long-term storage. Read More

#human

Alphabet CEO Sundar Pichai says ‘broken’ Google Voice assistant proves that A.I. isn’t sentient

Alphabet CEO Sundar Pichai said the company’s artificial intelligence technology is not anywhere near being sentient and may never get there, even as he touted A.I. as central to the $1.4 trillion company’s future.

“LaMDA is not sentient by any stretch of the imagination,” Pichai said during an onstage interview at Vox Media’s Code conference in Beverly Hills on Tuesday evening, referring to the name of one of Google’s A.I. technologies. Read More

#human

Using AI to decode speech from brain activity

Every year, more than 69 million people around the world suffer traumatic brain injury, which leaves many of them unable to communicate through speech, typing, or gestures. These people’s lives could dramatically improve if researchers developed a technology to decode language directly from noninvasive brain recordings. Today, we’re sharing research that takes a step toward this goal. We’ve developed an AI model that can decode speech from noninvasive recordings of brain activity.

Our results show that, from three seconds of brain activity, our model can decode the corresponding speech segments with up to 73 percent top-10 accuracy from a vocabulary of 793 words, i.e., a large portion of the words we typically use on a day-to-day basis.
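
To make the metric concrete: top-10 accuracy counts a segment as correct if the true word appears among the model's ten highest-scored candidates out of the 793-word vocabulary. The sketch below only illustrates how that metric is computed on made-up scores; it is not Meta's decoding model.

```python
import numpy as np

def top_k_accuracy(scores: np.ndarray, targets: np.ndarray, k: int = 10) -> float:
    """Fraction of examples whose true word is among the k highest-scored candidates.

    scores:  (n_segments, vocab_size) model scores, one row per brain-activity segment.
    targets: (n_segments,) indices of the true words.
    """
    top_k = np.argsort(scores, axis=1)[:, -k:]        # indices of the k best-scoring words
    hits = (top_k == targets[:, None]).any(axis=1)
    return float(hits.mean())

# Toy demonstration with random scores over a 793-word vocabulary (illustrative only).
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 793))
targets = rng.integers(0, 793, size=200)
print(f"top-10 accuracy: {top_k_accuracy(scores, targets):.1%}")
```

With random scores, chance level is roughly 10/793, about 1.3 percent, which gives a sense of how far above chance the reported 73 percent sits.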

Decoding speech from brain activity has been a long-standing goal of neuroscientists and clinicians, but most of the progress has relied on invasive brain-recording techniques, such as stereotactic electroencephalography and electrocorticography. These devices provide clearer signals than noninvasive methods but require neurosurgical interventions. While results from that work suggest that decoding speech from recordings of brain activity is feasible, decoding speech with noninvasive approaches would provide a safer, more scalable solution that could ultimately benefit many more people. This is very challenging, however, since noninvasive recordings are notoriously noisy and can greatly vary across recording sessions and individuals for a variety of reasons, including differences in each person’s brain and where the sensors are placed. Read More

Read the Paper

#human, #nlp

Self-Taught AI Shows Similarities to How the Brain Works

Self-supervised learning allows a neural network to figure out for itself what matters. The process might be what makes our own brains so successful.

For a decade now, many of the most impressive artificial intelligence systems have been taught using a huge inventory of labeled data. An image might be labeled “tabby cat” or “tiger cat,” for example, to “train” an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and woefully deficient.

Such “supervised” training requires data laboriously labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information. For example, a neural network might use the presence of grass to recognize a photo of a cow, because cows are typically photographed in fields.

“We are raising a generation of algorithms that are like undergrads [who] didn’t come to class the whole semester and then the night before the final, they’re cramming,” said Alexei Efros, a computer scientist at the University of California, Berkeley. “They don’t really learn the material, but they do well on the test.” Read More
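
As a rough, self-contained illustration of the self-supervised idea (not the method of any specific system discussed above), the sketch below hides part of each unlabeled example and trains a model to predict the hidden part from the visible part, so the supervision signal comes from the data itself rather than from human-written labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data whose two halves are genuinely correlated, so the visible half
# carries information about the part we mask out.
latent = rng.normal(size=(1000, 4))
mixing = rng.normal(size=(4, 4))
data = np.concatenate([latent, latent @ mixing], axis=1) + 0.1 * rng.normal(size=(1000, 8))

visible, masked = data[:, :4], data[:, 4:]   # pretext task: hide the second half

# Self-supervised objective: predict the masked features from the visible ones.
# A single linear layer trained by gradient descent on mean squared error.
W = np.zeros((4, 4))
lr = 0.01
for _ in range(500):
    residual = visible @ W - masked
    W -= lr * visible.T @ residual / len(data)   # gradient of 0.5 * MSE w.r.t. W

mse = np.mean((visible @ W - masked) ** 2)
print(f"masked-feature reconstruction error: {mse:.3f}")   # far below the raw variance of the masked half
```

No labels appear anywhere; scaled up to images or text with far larger models, the same principle is what "self-supervised" refers to in the article above.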

#human, #self-supervised

A Path Towards Autonomous Machine Intelligence Version 0.9.2, 2022-06-27

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? This position paper proposes an architecture and training paradigms with which to construct autonomous intelligent agents. It combines concepts such as a configurable predictive world model, behavior driven by intrinsic motivation, and hierarchical joint embedding architectures trained with self-supervised learning. Read More
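
One ingredient named above, a joint embedding architecture trained with self-supervised learning, can be caricatured as follows: encode two related inputs (say, an observation and its near future) into a shared embedding space and train a predictor so that one embedding can be predicted from the other, with the prediction made in representation space rather than in raw input space. The code below is a toy rendering of that general idea under my own assumptions, not the architecture proposed in the paper, and it relies only on a stop-gradient and a predictor to discourage representational collapse, where real systems use stronger regularization.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "world": the next observation is a fixed linear transform of the current one plus noise.
dynamics = 0.2 * torch.randn(16, 16)

def sample_pairs(n: int):
    x = torch.randn(n, 16)                         # current observation
    y = x @ dynamics + 0.05 * torch.randn(n, 16)   # next observation
    return x, y

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
predictor = nn.Linear(8, 8)   # predicts the embedding of y from the embedding of x

opt = torch.optim.Adam(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3)
for _ in range(2000):
    x, y = sample_pairs(256)
    z_x, z_y = encoder(x), encoder(y)
    # The loss lives in embedding space; the target embedding is treated as fixed (stop-gradient).
    loss = ((predictor(z_x) - z_y.detach()) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"embedding-space prediction error: {loss.item():.4f}")
```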

#human

OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework

In this work, we pursue a unified paradigm for multimodal pretraining to break the scaffolds of complex task- and modality-specific customization. We propose OFA, a Task-Agnostic and Modality-Agnostic framework that supports Task Comprehensiveness. OFA unifies a diverse set of cross-modal and unimodal tasks, including image generation, visual grounding, image captioning, image classification, language modeling, etc., in a simple sequence-to-sequence learning framework. OFA follows instruction-based learning in both the pretraining and finetuning stages, requiring no extra task-specific layers for downstream tasks. In comparison with recent state-of-the-art vision & language models that rely on extremely large cross-modal datasets, OFA is pretrained on only 20M publicly available image-text pairs. Despite its simplicity and relatively small-scale training data, OFA achieves new SOTAs in a series of cross-modal tasks while attaining highly competitive performance on unimodal tasks. Our further analysis indicates that OFA can also effectively transfer to unseen tasks and unseen domains. Our code and models are publicly available at https://github.com/OFA-Sys/OFA. Read More
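
The unifying move is that every task, cross-modal or unimodal, is phrased as an instruction plus an input, and the model emits a plain token sequence, so no task-specific heads are needed. The snippet below is a hypothetical illustration of that framing; the instruction wordings, file names, and the Seq2SeqExample helper are my own, not OFA's actual prompts or API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Seq2SeqExample:
    """Every task becomes (instruction + optional image) -> target token sequence."""
    instruction: str
    image_path: Optional[str]   # None for text-only tasks
    target: str

# Hypothetical task formulations; one sequence-to-sequence model is trained on all of
# them at once, with no task-specific output layers.
examples = [
    Seq2SeqExample("what does the image describe?", "img/dog.jpg", "a dog catching a frisbee"),
    Seq2SeqExample("which region does the text 'red car' describe?", "img/street.jpg", "<box>12 40 88 120</box>"),
    Seq2SeqExample("does the image describe 'two cats on a sofa'?", "img/cats.jpg", "no"),
    Seq2SeqExample("what is the continuation of 'the quick brown'?", None, "fox jumps over the lazy dog"),
]

for ex in examples:
    modality = "image+text" if ex.image_path else "text-only"
    print(f"[{modality}] {ex.instruction!r} -> {ex.target!r}")
```

Because the output is always text, adding a new task amounts to adding a new instruction format rather than a new output head, which is what makes the framework easy to extend.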

#human

Reading List for Topics in Multimodal Machine Learning

By Paul Liang (pliang@cs.cmu.edu), Machine Learning Department and Language Technologies Institute, CMU, with help from members of the MultiComp Lab at LTI, CMU. If there are any areas, papers, or datasets I missed, please let me know! Read More

#human

Towards artificial general intelligence via a multimodal foundation model

The fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of humans. Despite tremendous success in AI research, most existing methods have only a single cognitive ability. To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained on huge multimodal data, which can be quickly adapted to various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model by self-supervised learning with weak semantic correlation data crawled from the Internet, and show that promising results can be obtained on a wide range of downstream tasks. In particular, with the developed model-interpretability tools, we demonstrate that our foundation model now possesses strong imagination ability. We believe that our work makes a transformative stride towards AGI, from our common practice of “weak or narrow AI” to that of “strong or generalized AI”. Read More
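
The "weak semantic correlation data" here is loosely paired image-text data scraped from the web, as opposed to carefully annotated labels. A common way to learn from such pairs, and one plausible reading of the self-supervised setup described above (the paper's exact objective may differ), is a contrastive loss that pulls matching image and text embeddings together and pushes mismatched ones apart, as in the following sketch.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07):
    """InfoNCE-style loss over a batch of weakly aligned image-text pairs.

    Row i of image_emb and row i of text_emb are assumed to come from the same
    web page; every other pairing in the batch is treated as a negative.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(len(logits))             # matching pairs sit on the diagonal
    # Symmetric cross-entropy: image-to-text retrieval and text-to-image retrieval.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets)) / 2

# Toy embeddings standing in for the outputs of an image encoder and a text encoder.
torch.manual_seed(0)
image_emb = torch.randn(8, 32)
text_emb = image_emb + 0.1 * torch.randn(8, 32)     # weakly correlated, like web-crawled pairs
print(f"contrastive loss: {contrastive_loss(image_emb, text_emb).item():.3f}")
```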

#human

Are babies the key to the next generation of artificial intelligence?

Babies can help unlock the next generation of artificial intelligence (AI), according to Trinity College neuroscientists and colleagues who have just published new guiding principles for improving AI.

The research, published today in the journal Nature Machine Intelligence, examines the neuroscience and psychology of infant learning and distills three principles to guide the next generation of AI, which will help overcome the most pressing limitations of machine learning.

Dr Lorijn Zaadnoordijk, Marie Skłodowska-Curie Research Fellow at Trinity College, explained:

“Artificial intelligence (AI) has made tremendous progress in the last decade, giving us smart speakers, autopilots in cars, ever-smarter apps, and enhanced medical diagnosis. These exciting developments in AI have been achieved thanks to machine learning which uses enormous datasets to train artificial neural network models. However, progress is stalling in many areas because the datasets that machines learn from must be painstakingly curated by humans. But we know that learning can be done much more efficiently, because infants don’t learn this way! They learn by experiencing the world around them, sometimes by even seeing something just once.” Read More

#human

DeepMind AI reacts to the physically impossible like a human infant

Adding assumptions about objects better than learning from scratch, claims researcher

DeepMind has looked to developmental psychology to help AI gain a basic understanding of the physical world.…

Real-world physics is hard for AIs to grasp when asked to start from scratch with only training data to guide them. But researchers have demonstrated that babies as young as five months are surprised if they are shown a physically impossible event, such as a toy suddenly disappearing, implying they gain some intuitive physical understanding at an early age. Read More

#human