DNA isn’t the only molecule we could use for digital storage. It turns out that solutions containing sugars, amino acids and other small molecules could replace hard drives too.
Jacob Rosenstein and his colleagues at Brown University in Rhode Island stored and retrieved pictures of an Egyptian cat, an ibex and an anchor using an array of these small molecules. They say the approach could enable storage that is less vulnerable to hacking and that can function in more extreme environmental conditions.
Inspired by recent research showing that it is possible to store data on DNA, Rosenstein’s team wanted to see if smaller and simpler molecules could also encode abstract information. Read More
The neuroscience of imagination – Andrey Vyshedskiy
A Standard Model of the Mind: Toward a Common Computational Framework across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics
A standard model captures a community consensus over a coherent region of science, serving as a cumulative reference point for the field that can provide guidance for both research and applications, while also focusing efforts to extend or revise it. Here we propose developing such a model for human-like minds: computational entities whose structures and processes are substantially similar to those found in human cognition. Our hypothesis is that cognitive architectures provide the appropriate computational abstraction for defining a standard model, although the standard model is not itself such an architecture. The proposed standard model began as an initial consensus at the 2013 AAAI Fall Symposium on Integrated Cognition, but is extended here via a synthesis across three existing cognitive architectures: ACT-R, Sigma, and Soar. The resulting standard model spans key aspects of structure and processing, memory and content, learning, and perception and motor, highlighting loci of architectural agreement as well as disagreement with the consensus while identifying potential areas of remaining incompleteness. The hope is that this work will provide an important step towards engaging the broader community in further development of the standard model of the mind. Read More
The evolution of cognitive architecture will deliver human-like AI
There’s no one right way to build a robot, just as there’s no singular means of imparting it with intelligence. Last month, Engadget spoke with Carnegie Mellon University associate research professor Nathan Michael, director of the Resilient Intelligent Systems Lab, whose work involves stacking and combining a robot’s various piecemeal capabilities, as it learns them, into an amalgamated artificial general intelligence (AGI). Think: a Roomba that learns how to vacuum, then learns how to mop, then learns how to dust and do dishes — pretty soon, you’ve got Rosie from The Jetsons.
But attempting to model an intelligence after either the ephemeral human mind or the exact physical structure of the brain (rather than iterating increasingly capable Roombas) is no small task — and with no small amount of competing hypotheses and models to boot. In fact, a 2010 survey of the field found more than two dozen such cognitive architectures actively being studied. Read More
World Models
We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment. Read More
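The pipeline the abstract describes — compress raw observations into a small latent representation, then train a tiny policy on those features — can be caricatured in a few lines. Everything below is a stand-in for illustration: the real system uses a trained VAE and an MDN-RNN for the world model and an evolution strategy to tune the controller, whereas here a fixed random projection plays the encoder's role:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained encoder: maps a raw observation (e.g. flattened
# 64x64 pixels) to a compact latent vector z.
OBS_DIM, LATENT_DIM, ACTION_DIM = 64 * 64, 32, 3
W_enc = rng.normal(size=(LATENT_DIM, OBS_DIM)) / np.sqrt(OBS_DIM)

def encode(obs):
    return np.tanh(W_enc @ obs)

# The "very compact and simple policy": a single linear layer on the latent.
# Its few hundred parameters are what an optimizer would actually tune.
W_pi = rng.normal(size=(ACTION_DIM, LATENT_DIM)) * 0.1

def policy(obs):
    return np.tanh(W_pi @ encode(obs))

action = policy(rng.normal(size=OBS_DIM))
```

The point of the sketch is the size asymmetry: the world model absorbs the high-dimensional input, so the policy itself stays small enough to train cheaply, including entirely inside the model's own simulated rollouts.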
AlphaStar: An Evolutionary Computation Perspective
In January 2019, DeepMind revealed AlphaStar to the world — the first artificial intelligence (AI) system to beat a professional player at the game of StarCraft II — representing a milestone in the progress of AI. AlphaStar draws on many areas of AI research, including deep learning, reinforcement learning, game theory, and evolutionary computation (EC). In this paper we analyze AlphaStar primarily through the lens of EC, presenting a new look at the system and relating it to many concepts in the field. We highlight some of its most interesting aspects — the use of Lamarckian evolution, competitive co-evolution, and quality diversity. In doing so, we hope to provide a bridge between the wider EC community and one of the most significant AI systems developed in recent times. Read More
The Power of Self-Learning Systems
AI Codes its Own ‘AI Child’ – AutoML
Rock Paper Scissors robot wins 100% of the time
The newest version of a robot from Japanese researchers can not only challenge the best human players in a game of Rock Paper Scissors, but it can beat them — 100% of the time. In reality, the robot uses a sophisticated form of cheating which both breaks the game itself (the robot didn’t “win” by the actual rules of the game) and shows the amazing potential of the human-machine interfaces of tomorrow. Read More
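The "cheat" is latency, not prediction: a high-speed camera classifies the human hand shape mid-throw, and the robot commits to the winning counter before the human finishes moving. Once the gesture is recognized, the decision itself is a trivial lookup — the hard parts (millisecond-scale vision and actuation) are omitted in this sketch:

```python
# After vision has classified the opponent's forming gesture, choosing the
# winning move is a constant-time table lookup.
COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def respond(opponent_gesture):
    """Return the move that beats the gesture the camera detected."""
    return COUNTER[opponent_gesture]

assert respond("rock") == "paper"
```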
AI Software Reveals the Inner Workings of Short-Term Memory
Research by neuroscientists at the University of Chicago shows how short-term, working memory uses networks of neurons differently depending on the complexity of the task at hand.
The researchers used modern artificial intelligence (AI) techniques to train computational neural networks to solve a range of complex behavioral tasks that required storing information in short-term memory. The AI networks were based on the biological structure of the brain and revealed two distinct processes involved in short-term memory: a “silent” process, in which the brain stores short-term memories without ongoing neural activity, and a second, more active process, in which circuits of neurons fire continuously. Read More
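The contrast between the two storage modes can be illustrated with a toy simulation: "active" storage keeps a memory alive as persistent firing in a recurrent loop, while "silent" storage parks it in a slowly decaying synaptic trace even as firing dies out. All parameters here are illustrative and not fit to the study:

```python
import numpy as np

steps, gain = 50, 2.0

# Active maintenance: recurrent self-excitation sustains the firing rate
# at a stable fixed point long after the input ends.
rate = 1.0
for _ in range(steps):
    rate = np.tanh(gain * rate)  # settles near a nonzero fixed point (~0.96)

# Silent maintenance: activity decays to zero quickly, but a synaptic trace
# decays far more slowly and can reinstate the memory when probed later.
activity, trace = 1.0, 1.0
for _ in range(steps):
    activity *= 0.5              # firing dies out fast
    trace *= 0.99                # synaptic trace persists
```

After 50 steps the recurrent rate is still high, the silent circuit's activity is effectively zero, yet its synaptic trace retains most of its strength — the memory is held with no ongoing firing.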