A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics

A standard model captures a community consensus over a coherent region of science, serving as a cumulative reference point for the field that can provide guidance for both research and applications, while also focusing efforts to extend or revise it. Here we propose developing such a model for human-like minds: computational entities whose structures and processes are substantially similar to those found in human cognition. Our hypothesis is that cognitive architectures provide the appropriate computational abstraction for defining a standard model, although the standard model is not itself such an architecture. The proposed standard model began as an initial consensus at the 2013 AAAI Fall Symposium on Integrated Cognition, but is extended here via a synthesis across three existing cognitive architectures: ACT-R, Sigma, and Soar. The resulting standard model spans key aspects of structure and processing, memory and content, learning, and perception and motor, highlighting loci of architectural agreement as well as disagreement with the consensus while identifying potential areas of remaining incompleteness. The hope is that this work will provide an important step towards engaging the broader community in further development of the standard model of the mind. Read More

#human

The evolution of cognitive architecture will deliver human-like AI

There’s no one right way to build a robot, just as there’s no singular means of imparting it with intelligence. Last month, Engadget spoke with Nathan Michael, associate research professor at Carnegie Mellon University and director of the Resilient Intelligent Systems Lab, whose work involves combining a robot’s various piecemeal capabilities, as it learns them, into an amalgamated artificial general intelligence (AGI). Think of a Roomba that learns how to vacuum, then learns how to mop, then learns how to dust and do dishes — pretty soon, you’ve got Rosie from The Jetsons.

But attempting to model an intelligence after either the ephemeral human mind or the exact physical structure of the brain (rather than iterating increasingly capable Roombas) is no small task — and there is no small number of competing hypotheses and models to boot. In fact, a 2010 survey of the field found more than two dozen such cognitive architectures actively being studied. Read More

#human, #singularity

World Models

We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. Read More
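The pipeline above can be sketched in a few lines (a toy illustration, not the paper's actual models — the encoder here is a random projection standing in for a trained world model, and all names and dimensions are made up): a high-dimensional observation is compressed into a small latent vector, and a very small linear policy acts on that latent instead of raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained world-model encoder: projects a
# high-dimensional observation down to a small latent vector z.
OBS_DIM, LATENT_DIM, N_ACTIONS = 1024, 32, 3
W_enc = rng.normal(size=(LATENT_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)

def encode(obs):
    """Compress an observation into the world model's latent space."""
    return np.tanh(W_enc @ obs)

# The policy is deliberately tiny: one linear layer over the
# latent features, not over the raw observation.
W_pi = rng.normal(size=(N_ACTIONS, LATENT_DIM))

def act(obs):
    z = encode(obs)
    return int(np.argmax(W_pi @ z))

obs = rng.normal(size=OBS_DIM)
action = act(obs)
```

Because the policy sees only the 32-dimensional latent, it has orders of magnitude fewer parameters than a policy over the 1024-dimensional observation, which is what makes it cheap to train (or even evolve) inside the learned model.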

#gans, #human, #reinforcement-learning

AlphaStar: An Evolutionary Computation Perspective

In January 2019, DeepMind revealed AlphaStar to the world—the first artificial intelligence (AI) system to beat a professional player at the game of StarCraft II—representing a milestone in the progress of AI. AlphaStar draws on many areas of AI research, including deep learning, reinforcement learning, game theory, and evolutionary computation (EC). In this paper we analyze AlphaStar primarily through the lens of EC, presenting a new look at the system and relating it to many concepts in the field. We highlight some of its most interesting aspects—the use of Lamarckian evolution, competitive co-evolution, and quality diversity. In doing so, we hope to provide a bridge between the wider EC community and one of the most significant AI systems developed in recent times. Read More

#human, #reinforcement-learning

The Power of Self-Learning Systems

Read More

#deep-learning, #human, #reinforcement-learning, #videos

AI Codes its Own ‘AI Child’ – AutoML

Read More

#deep-learning, #human, #reinforcement-learning, #videos

Rock Paper Scissors robot wins 100% of the time

The newest version of a robot from Japanese researchers can not only challenge the best human players in a game of Rock Paper Scissors, but beat them — 100% of the time. In reality, the robot uses a sophisticated form of cheating that both breaks the game itself (the robot didn’t “win” by the actual rules of the game) and shows the amazing potential of the human-machine interfaces of tomorrow. Read More
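The trick is easy to see in code: once high-speed vision recognizes the human's gesture mid-throw, picking the winning counter is a constant-time lookup — the robot's edge is reaction speed, not strategy. A minimal sketch (function and variable names are illustrative, not from the researchers' system):

```python
# Each gesture maps to the move that beats it.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def robot_move(observed_human_gesture: str) -> str:
    """Return the counter to a recognized human gesture.

    The real system's advantage is timing: the gesture is
    recognized before the human hand finishes moving, so this
    lookup always fires a winning move "simultaneously."
    """
    return COUNTER[observed_human_gesture]
```

The cheat is entirely in the perception step; the decision logic itself is three dictionary entries.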

#human, #robotics

AI Software Reveals the Inner Workings of Short-Term Memory

Research by neuroscientists at the University of Chicago shows how short-term working memory uses networks of neurons differently depending on the complexity of the task at hand.

The researchers used modern artificial intelligence (AI) techniques to train computational neural networks to solve a range of complex behavioral tasks that required storing information in short-term memory. The AI networks were based on the biological structure of the brain and revealed two distinct processes involved in short-term memory: a “silent” process, in which the brain stores short-term memories without ongoing neural activity, and a second, more active process, in which circuits of neurons fire continuously. Read More

#human

Neuroscience-Inspired Artificial Intelligence

The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields have become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields. Read More

#human

Watching AI Slowly Forget a Human Face Is Incredibly Creepy

A programmer created an algorithmically-generated face, and then made the network slowly forget what the face it had generated looked like.
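The "switching neurons off one by one" effect can be mimicked with a toy generator (a hypothetical sketch, not the artist's actual code): mask out one hidden unit at a time and measure how far the output drifts from the original image.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-hidden-layer "generator": latent -> hidden -> image vector.
W1 = rng.normal(size=(64, 16))
W2 = rng.normal(size=(256, 64))
z = rng.normal(size=16)   # fixed latent code for one "face"

def generate(active):
    """Render the image with only the given hidden units switched on."""
    h = np.tanh(W1 @ z)
    h = h * active          # mask out "forgotten" neurons
    return W2 @ h

full = generate(np.ones(64))
mask = np.ones(64)
drift = []
for i in range(64):         # switch neurons off one at a time
    mask[i] = 0.0
    drift.append(float(np.linalg.norm(generate(mask) - full)))
```

Each step of the loop corresponds to one frame of the time-lapse: the image degrades as units are silenced, until, with every unit off, nothing of the original remains.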

The result, a piece of video art titled “What I saw before the darkness,” is an eerie time-lapse view of the inside of a demented AI’s mind as its artificial neurons are switched off, one by one, HAL 9000 style. Read More

#human, #image-recognition