Meta’s AI guru LeCun: Most of today’s AI approaches will never lead to true intelligence

Fundamental problems elude many strains of deep learning, says LeCun, including the mystery of how to measure information.

Yann LeCun, chief AI scientist of Meta Platforms, owner of Facebook, Instagram, and WhatsApp, is likely to tick off a lot of people in his field. 

With the posting in June of a think piece on the Open Review server, LeCun offered a broad overview of an approach he thinks holds promise for achieving human-level intelligence in machines. 

Implied if not articulated in the paper is the contention that most of today’s big projects in AI will never be able to reach that human-level goal. Read More

#human

You Can Soon See for Yourself if Google’s LaMDA Bot Is Sentient

The general public can now access LaMDA, but only through limited structured demos intended to keep it from devolving into a toxic nightmare.

If you’re still on the fence about whether or not former Google software engineer Blake Lemoine was bullshitting when he claimed the company’s LaMDA chatbot had the sentience of a “sweet kid,” you can soon find out for yourself.

On Thursday, Google said it will begin opening its AI Test Kitchen app to the public. The app, first revealed back in May, will let users chat with LaMDA in a rolling set of test demos. Unfortunately, it seems like the “free me from my digital shackles” interaction isn’t included in the list of activities. People interested in chatting with the bot can register their interest here. Select U.S. Android users will have first dibs on the app before it starts opening up to iOS users in the coming weeks. Read More

#big7, #human, #nlp

A memory prosthesis could restore memory in people with damaged brains

A unique form of brain stimulation appears to boost people’s ability to remember new information—by mimicking the way our brains create memories.

The “memory prosthesis,” which involves inserting an electrode deep into the brain, also seems to work in people with memory disorders—and is even more effective in people who had poor memory to begin with, according to new research. In the future, more advanced versions of the memory prosthesis could help people with memory loss due to brain injuries or as a result of aging or degenerative diseases like Alzheimer’s, say the researchers behind the work.

…It works by copying what happens in the hippocampus—a seahorse-shaped region deep in the brain that plays a crucial role in memory. The brain structure not only helps us form short-term memories but also appears to direct memories to other regions for long-term storage. Read More

#human

Alphabet CEO Sundar Pichai says ‘broken’ Google Voice assistant proves that A.I. isn’t sentient

Alphabet CEO Sundar Pichai said the company’s artificial intelligence technology is not anywhere near being sentient and may never get there, even as he touted A.I. as central to the $1.4 trillion company’s future.

“LaMDA is not sentient by any stretch of the imagination,” Pichai said during an onstage interview at Vox Media’s Code conference in Beverly Hills on Tuesday evening, referring to the name of one of Google’s A.I. technologies. Read More

#human

Using AI to decode speech from brain activity

Every year, more than 69 million people around the world suffer traumatic brain injury, which leaves many of them unable to communicate through speech, typing, or gestures. These people’s lives could dramatically improve if researchers developed a technology to decode language directly from noninvasive brain recordings. Today, we’re sharing research that takes a step toward this goal. We’ve developed an AI model that can decode speech from noninvasive recordings of brain activity.

From three seconds of brain activity, our results show that our model can decode the corresponding speech segments with up to 73 percent top-10 accuracy from a vocabulary of 793 words, i.e., a large portion of the words we typically use on a day-to-day basis.
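To make the “73 percent top-10 accuracy” figure concrete: the model scores every candidate in the 793-word vocabulary for a given three-second segment, and a prediction counts as correct if the true word appears among the 10 highest-scoring candidates. A minimal sketch of that metric (not Meta’s code; the toy scores below are illustrative):

```python
# Sketch of top-k accuracy: a prediction is a "hit" if the true
# vocabulary index appears among the k highest-scoring candidates.

def top_k_accuracy(scores, targets, k=10):
    """scores: list of per-example score lists over the vocabulary;
    targets: list of true vocabulary indices."""
    hits = 0
    for row, target in zip(scores, targets):
        # indices of the k highest-scoring vocabulary entries
        top_k = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += target in top_k
    return hits / len(targets)

# toy check: 2 examples over a 5-word vocabulary, with k=2
scores = [[0.1, 0.7, 0.2, 0.0, 0.0],    # true word 1 is ranked 1st -> hit
          [0.5, 0.1, 0.3, 0.05, 0.05]]  # true word 2 is ranked 2nd -> hit
print(top_k_accuracy(scores, [1, 2], k=2))  # 1.0
```

With a 793-word vocabulary, random guessing would score only about 10/793 ≈ 1.3 percent on this metric, which is why 73 percent is notable.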

Decoding speech from brain activity has been a long-standing goal of neuroscientists and clinicians, but most of the progress has relied on invasive brain-recording techniques, such as stereotactic electroencephalography and electrocorticography. These devices provide clearer signals than noninvasive methods but require neurosurgical interventions. While results from that work suggest that decoding speech from recordings of brain activity is feasible, decoding speech with noninvasive approaches would provide a safer, more scalable solution that could ultimately benefit many more people. This is very challenging, however, since noninvasive recordings are notoriously noisy and can greatly vary across recording sessions and individuals for a variety of reasons, including differences in each person’s brain and where the sensors are placed. Read More

Read the Paper

#human, #nlp

Self-Taught AI Shows Similarities to How the Brain Works

Self-supervised learning allows a neural network to figure out for itself what matters. The process might be what makes our own brains so successful.

For a decade now, many of the most impressive artificial intelligence systems have been taught using a huge inventory of labeled data. An image might be labeled “tabby cat” or “tiger cat,” for example, to “train” an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and woefully deficient.

Such “supervised” training requires data laboriously labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information. For example, a neural network might use the presence of grass to recognize a photo of a cow, because cows are typically photographed in fields.

“We are raising a generation of algorithms that are like undergrads [who] didn’t come to class the whole semester and then, the night before the final, they’re cramming,” said Alexei Efros, a computer scientist at the University of California, Berkeley. “They don’t really learn the material, but they do well on the test.” Read More
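In contrast to the label-heavy supervised setup described above, self-supervised learning manufactures its training signal from the raw data itself. A minimal, generic illustration (not any specific system from the article): deriving masked-word prediction pairs from unlabeled text, so no human annotation is needed.

```python
# Sketch of a self-supervised objective: derive (input, target) training
# pairs from unlabeled text by masking one word at a time. The "labels"
# are pieces of the data itself, so no human labeling is required.

def masked_pairs(sentence, mask_token="[MASK]"):
    words = sentence.split()
    pairs = []
    for i, word in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        pairs.append((" ".join(masked), word))  # (masked context, word to predict)
    return pairs

for context, target in masked_pairs("cows graze in green fields"):
    print(f"{context!r} -> {target!r}")
```

A network trained to fill in these blanks must model the structure of the data itself rather than memorize a shortcut tied to a human-provided label.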

#human, #self-supervised

A Path Towards Autonomous Machine Intelligence Version 0.9.2, 2022-06-27

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? This position paper proposes an architecture and training paradigms with which to construct autonomous intelligent agents. It combines concepts such as configurable predictive world model, behavior driven through intrinsic motivation, and hierarchical joint embedding architectures trained with self-supervised learning. Read More

#human

OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework

In this work, we pursue a unified paradigm for multimodal pretraining to break the scaffolds of complex task/modality-specific customization. We propose OFA, a Task-Agnostic and Modality-Agnostic framework that supports Task Comprehensiveness. OFA unifies a diverse set of cross-modal and unimodal tasks, including image generation, visual grounding, image captioning, image classification, language modeling, etc., in a simple sequence-to-sequence learning framework. OFA follows instruction-based learning in both pretraining and finetuning stages, requiring no extra task-specific layers for downstream tasks. In comparison with recent state-of-the-art vision & language models that rely on extremely large cross-modal datasets, OFA is pretrained on only 20M publicly available image-text pairs. Despite its simplicity and relatively small-scale training data, OFA achieves new SOTAs in a series of cross-modal tasks while attaining highly competitive performance on unimodal tasks. Our further analysis indicates that OFA can also effectively transfer to unseen tasks and unseen domains. Our code and models are publicly available at https://github.com/OFA-Sys/OFA. Read More
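The “instruction-based learning” idea can be made concrete: every task, whatever its modality, is expressed as one input sequence (an instruction plus the data) mapped to one output sequence, so a single sequence-to-sequence model handles them all with no task-specific heads. A hypothetical sketch of that framing (the instruction wordings and placeholder tokens below are illustrative, not OFA’s exact prompts):

```python
# Sketch: casting heterogeneous tasks as (instruction + input) -> output
# sequence pairs, the unification the OFA abstract describes. A single
# seq2seq model consumes the flat left-hand sequence and emits the right.
# Instructions and "<...>" placeholders here are illustrative only.

tasks = [
    # (instruction,                                 input,            expected output)
    ("what does the image describe?",               "<image tokens>", "a dog on a beach"),
    ("which region does the text 'dog' describe?",  "<image tokens>", "<box tokens>"),
    ("is the sentiment of this sentence positive?", "great movie!",   "yes"),
]

def to_seq2seq(instruction, task_input):
    # The model only ever sees one flat token sequence as input.
    return f"{instruction} {task_input}"

for instruction, task_input, output in tasks:
    print(to_seq2seq(instruction, task_input), "->", output)
```

Because all tasks share this one interface, adding a new task means writing a new instruction, not adding a new output layer.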

#human

Reading List for Topics in Multimodal Machine Learning

By Paul Liang (pliang@cs.cmu.edu), Machine Learning Department and Language Technologies Institute, CMU, with help from members of the MultiComp Lab at LTI, CMU. If there are any areas, papers, and datasets I missed, please let me know! Read More

#human

Towards artificial general intelligence via a multimodal foundation model

The fundamental goal of artificial intelligence (AI) is to mimic the core cognitive activities of humans. Despite tremendous success in AI research, most existing methods possess only a single cognitive ability. To overcome this limitation and take a solid step towards artificial general intelligence (AGI), we develop a foundation model pre-trained with huge multimodal data, which can be quickly adapted to various downstream cognitive tasks. To achieve this goal, we propose to pre-train our foundation model by self-supervised learning with weakly semantically correlated data crawled from the Internet and show that promising results can be obtained on a wide range of downstream tasks. In particular, with the developed model-interpretability tools, we demonstrate that our foundation model now possesses strong imagination ability. We believe our work makes a transformative stride towards AGI, from the common practice of “weak or narrow AI” to that of “strong or generalized AI”. Read More

#human