AI That Can Learn Cause-and-Effect: These Neural Networks Know What They’re Doing

A certain type of artificial intelligence agent can learn the cause-and-effect basis of a navigation task during training.

Neural networks can learn to solve all sorts of problems, from identifying cats in photographs to steering a self-driving car. But whether these powerful, pattern-recognizing algorithms actually understand the tasks they are performing remains an open question.

Researchers at MIT have now shown that a certain type of neural network is able to learn the true cause-and-effect structure of the navigation task it is being trained to perform. Because these networks can understand the task directly from visual data, they should be more effective than other neural networks when navigating in a complex environment, like a location with dense trees or rapidly changing weather conditions. Read More

#human

A new brain-inspired intelligent system can drive a car using only 19 control neurons!

Read More
#human, #videos

Brud, Creators of Virtual Human Lil Miquela, Announce a New Direction

Five years have passed since Miquela Sousa made her first Instagram post. Since that historic day, her graphics and career have evolved, as she has continually defined and redefined what it means to be a virtual influencer. However, for half a decade the information about her managing company, Brud, has largely remained the same: a static Google document.

Anyone who has researched Miquela will recognize this screenshot of what Brud’s official website used to be. For years, this one-page Google document was the only accessible information on Miquela’s creators and content team.

Now, all of that has changed. Read More

#human

QNRs: Toward Language for Intelligent Machines

Impoverished syntax and nondifferentiable vocabularies make natural language a poor medium for neural representation learning and applications. Learned, quasilinguistic neural representations (QNRs) can upgrade words to embeddings and syntax to graphs to provide a more expressive and computationally tractable medium. Graph-structured, embedding-based quasilinguistic representations can support formal and informal reasoning, human and inter-agent communication, and the development of scalable quasilinguistic corpora with characteristics of both literatures and associative memory.

To achieve human-like intellectual competence, machines must be fully literate, able not only to read and learn, but to write things worth retaining as contributions to collective knowledge. In support of this goal, QNR-based systems could translate and process natural language corpora to support the aggregation, refinement, integration, extension, and application of knowledge at scale. Incremental development of QNR-based models can build on current methods in neural machine learning, and as systems mature, could potentially complement or replace today’s opaque, error-prone “foundation models” with systems that are more capable, interpretable, and epistemically reliable. Potential applications and implications are broad. Read More
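The abstract's core idea of upgrading "words to embeddings and syntax to graphs" can be illustrated with a toy sketch. This is not code from the paper; the node structure, relation labels, and embedding dimension below are illustrative assumptions only.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class QNRNode:
    # Each node carries a continuous embedding in place of a discrete word.
    embedding: np.ndarray
    # Syntax becomes explicit graph structure: a list of (relation, target) edges.
    edges: list = field(default_factory=list)

def connect(src: QNRNode, relation: str, dst: QNRNode) -> None:
    """Add a typed, directed edge from src to dst."""
    src.edges.append((relation, dst))

# Toy example: two embedded "concepts" linked by a hypothetical typed edge.
dim = 8
agent = QNRNode(embedding=np.zeros(dim))
action = QNRNode(embedding=np.ones(dim))
connect(agent, "performs", action)
```

Because nodes hold dense vectors rather than vocabulary indices, such a structure is differentiable with respect to its embeddings, which is the tractability property the abstract emphasizes.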

#human, #nlp

The world’s largest chip is creating AI networks larger than the human brain

Cerebras Systems, maker of the world’s largest chip, has lifted the lid on a new architecture capable of supporting AI models that outscale the human brain.

The current largest AI models (such as Switch Transformer from Google) are built on circa 1 trillion parameters, which Cerebras suggests can be compared loosely to synapses in the brain, of which there are 100 trillion.

By harnessing a combination of technologies (and with the assistance of Wafer-Scale Engine 2 (WSE-2), the world’s largest chip), Cerebras has now created a single system capable of supporting AI models with more than 120 trillion parameters. Read More

#human, #nvidia

OpenAI Five has started to defeat amateur human teams at Dota 2.

Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2. While today we play with restrictions, we aim to beat a team of top professionals at The International in August subject only to a limited set of heroes. We may not succeed: Dota 2 is one of the most popular and complex esports games in the world, with creative and motivated professionals who train year-round to earn part of Dota’s annual $40M prize pool (the largest of any esports game).

OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores — a larger-scale version of the system we built to play the much simpler solo variant of the game last year. Using a separate LSTM for each hero and no human data, it learns recognizable strategies. This indicates that reinforcement learning can yield long-term planning with large but achievable scale — without fundamental advances, contrary to our own expectations upon starting the project. Read More
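The blurb names Proximal Policy Optimization as the training algorithm. As a minimal sketch (in NumPy, not OpenAI's actual implementation), PPO's central piece is a clipped surrogate objective that limits how far each update can move the policy:

```python
import numpy as np

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """Clipped surrogate objective at the heart of PPO.

    new_logp / old_logp: log-probabilities of the taken actions under the
    current and behaviour policies; advantages: estimated advantages.
    Returns the loss to *minimize* (the negative clipped objective).
    """
    ratio = np.exp(new_logp - old_logp)               # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    return -np.mean(np.minimum(unclipped, clipped))
```

When the two policies agree, the ratio is 1 and the loss reduces to the negative mean advantage; when the ratio strays outside [1 − eps, 1 + eps], clipping removes the incentive to move further, which is what makes large-batch, massively parallel training of the kind described above stable.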

#human

Brain connectivity can build better AI

Artificial neural networks modeled on real brains can perform cognitive tasks

By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern, and applied it to an artificial neural network (ANN). They trained the ANN to perform a cognitive memory task and observed how it worked to complete the assignment. These ‘neuromorphic’ neural networks were able to use the same underlying architecture to support a wide range of learning capacities across multiple contexts. Read More
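The key move described above is constraining an ANN with an empirically measured connectivity pattern. A hedged sketch of one way this is commonly done (the matrix below is random, not real MRI data, and the details are assumptions): apply a binary connectivity mask elementwise to the weight matrix, so only unit pairs with an empirical connection can communicate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6  # toy number of brain regions / network units

# Hypothetical binary connectivity matrix (1 = empirical connection exists).
connectivity = (rng.random((n, n)) < 0.4).astype(float)
weights = rng.normal(size=(n, n))

def constrained_step(x, weights, mask):
    """One recurrent step; the mask zeroes weights for absent connections."""
    return np.tanh((weights * mask) @ x)

x = rng.normal(size=n)
y = constrained_step(x, weights, connectivity)
```

The mask stays fixed while the weights are trained, so the learned network is forced to solve the task within the brain-derived wiring diagram.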

#human

Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria

Technology to restore the ability to communicate in paralyzed persons who cannot speak has the potential to improve autonomy and quality of life. An approach that decodes words and sentences directly from the cerebral cortical activity of such patients may represent an advancement over existing methods for assisted communication.

Researchers implanted an array of 128 electrodes into the region of the brain responsible for movement of the mouth, lips, jaw, tongue, and larynx. They then trained a system to translate the resulting electrical impulses into conversational phrases. Read More
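At its simplest, decoding of this kind maps a window of multichannel cortical activity to the nearest learned word template. The sketch below uses synthetic data and a nearest-centroid classifier purely for illustration; the study's actual decoder is more sophisticated, and the vocabulary and feature construction here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels = 128  # matches the implanted array size; the data is synthetic
vocab = ["hello", "water", "yes"]

# Synthetic "training" result: each word gets a distinct mean activity pattern.
centroids = {w: rng.normal(loc=i, size=n_channels) for i, w in enumerate(vocab)}

def decode(window, centroids):
    """Map one window of multichannel activity to the nearest word template."""
    return min(centroids, key=lambda w: np.linalg.norm(window - centroids[w]))
```

A real system would classify many such windows per second and chain the outputs with a language model to form the conversational phrases the article describes.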

#human, #nlp

Enabling the ‘imagination’ of artificial intelligence

A team of researchers at USC is helping AI imagine the unseen, a technique that could also lead to fairer AI, new medicines and increased autonomous vehicle safety.

As humans, it’s easy to envision an object with different attributes. But, despite advances in deep neural networks that match or surpass human performance in certain tasks, computers still struggle with the very human skill of “imagination.” Read More

#human

What could make AI conscious? with Wojciech Zaremba, co-founder of OpenAI

Read More
#human, #robotics, #podcasts, #videos