Fuzzy Math Is Key to AI Chip That Promises Human-Like Intuition

Simon Knowles, chief technology officer of Graphcore Ltd., is smiling at a whiteboard as he maps out his vision for the future of machine learning. He uses a black marker to dot and diagram the nodes of the human brain: the parts that are “ruminative, that think deeply, that ponder.” His startup is trying to approximate these neurons and synapses in its next-generation computer processors, which the company is betting can “mechanize intelligence.”

Artificial intelligence is often thought of as complex software that mines vast datasets, but Knowles and his co-founder, Chief Executive Officer Nigel Toon, argue that more important obstacles still exist in the computers that run the software. The problem, they say, sitting in their airy offices in the British port city of Bristol, is that chips—known, depending on their function, as CPUs (central processing units) or GPUs (graphics processing units)—weren’t designed to “ponder” in any recognizably human way. Whereas human brains use intuition to simplify problems such as identifying an approaching friend, a computer might try to analyze every pixel of that person’s face, comparing it to a database of billions of images before attempting to say hello. That precision, which made sense when computers were primarily calculators, is massively inefficient for AI, burning huge quantities of energy to process all the relevant data. Read More
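The efficiency argument can be made concrete with a toy example. The sketch below (an illustration of reduced-precision arithmetic in general, not of Graphcore's hardware) shows that a "fuzzy" half-precision weighted sum of a thousand inputs lands very close to the exact 64-bit answer while storing each value in a quarter of the memory:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "neuron": a weighted sum of 1,000 positive inputs,
# computed once in full precision and once in half precision.
x = rng.random(1000)
w = rng.random(1000)

exact = np.dot(x, w)                            # 64-bit "calculator" arithmetic
fuzzy = float(np.dot(x.astype(np.float16), w.astype(np.float16)))

rel_error = abs(exact - fuzzy) / abs(exact)
memory_ratio = np.float16(0).nbytes / np.float64(0).nbytes

print(f"relative error: {rel_error:.4f}")   # small, despite the coarser arithmetic
print(f"memory ratio:   {memory_ratio}")    # 0.25
```

Halving or quartering the bits per number is one concrete sense in which giving up "calculator" precision buys energy and memory headroom for AI workloads.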

#human, #nvidia

An Explicitly Relational Neural Network Architecture

With a view to bridging the gap between deep learning and symbolic AI, we present a novel end-to-end neural network architecture that learns to form propositional representations with an explicitly relational structure from raw pixel data. In order to evaluate and analyse the architecture, we introduce a family of simple visual relational reasoning tasks of varying complexity. We show that the proposed architecture, when pretrained on a curriculum of such tasks, learns to generate reusable representations that better facilitate subsequent learning on previously unseen tasks when compared to a number of baseline architectures. The workings of a successfully trained model are visualised to shed some light on how the architecture functions. Read More
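The paper's architecture is not reproduced here, but the general flavour of explicitly relational processing can be sketched with a relation-network-style layer: a shared function scores every ordered pair of object vectors, and the pairwise outputs are aggregated into a single representation. All names, dimensions, and the linear/tanh functions below are illustrative assumptions, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)

def relation_layer(objects, w_g, w_f):
    """Score every ordered pair of object vectors with a shared function g,
    then aggregate the pairwise outputs and read them out with f."""
    n, d = objects.shape
    pair_sum = np.zeros(w_g.shape[1])
    for i in range(n):
        for j in range(n):
            if i != j:
                pair = np.concatenate([objects[i], objects[j]])  # one (subject, object) pair
                pair_sum += np.tanh(pair @ w_g)                  # shared relation function g
    return pair_sum @ w_f                                        # readout f over aggregated relations

# Four "objects" with 8-dimensional features, standing in for features
# a convolutional front end might extract from raw pixels.
objects = rng.standard_normal((4, 8))
w_g = rng.standard_normal((16, 32)) * 0.1
w_f = rng.standard_normal((32, 3)) * 0.1

out = relation_layer(objects, w_g, w_f)
print(out.shape)  # (3,)
```

Because the layer sums over all ordered pairs, its output does not depend on the order in which objects are presented, a property relational architectures typically rely on.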

#human, #neural-networks

Imagining Life as a Stack of Mental States

Contrary to what your intuition may tell you, looking forward to playing a tennis match on Sunday afternoon isn’t really about the tennis match at all.

What makes your nightly ritual of blanket-slippers-tea-kindle special hasn’t got much to do with the activity of reading at all.

An overnight mountain climbing trip on the weekend offers something more than an arduous trek up unforgiving soil.

Your obsession with that local Thai joint is thanks to something more fundamentally true than how good their green curry sauce is.

Layered imperceptibly beneath all the above activities is a cardinal fact. A necessary truth, one common to all human beings and applicable to every possible (un)pleasurable activity. A truth that shapes decision-making and the design of one’s lifestyle.

Every action, activity, hobby, or ritual is nothing more than the pursuit of a certain mental state. Read More

#human

MIT’s sensor-packed glove helps AI identify objects by touch

Researchers have spent years trying to teach robots how to grip different objects without crushing or dropping them. They could be one step closer, thanks to this low-cost, sensor-packed glove. In a paper published in Nature, a team of MIT scientists share how they used the glove to help AI recognize objects through touch alone. That information could help robots better manipulate objects, and it may aid in prosthetics design.

The “scalable tactile glove,” or STAG, is a simple knit glove packed with more than 550 tiny sensors. The researchers wore STAG while handling 26 different objects — including a soda can, scissors, a tennis ball, a spoon, a pen and a mug. As they did, the sensors gathered pressure-signal data, which was interpreted by a neural network. The system identified the objects through touch alone with up to 76 percent accuracy, and it was able to predict the weight of most objects to within about 60 grams. Read More
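The pipeline, pressure frames in and object identities out, can be sketched with synthetic data. This is a toy nearest-centroid classifier, not the paper's neural network; the sensor count, noise level, and signatures below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

N_SENSORS, N_OBJECTS, FRAMES = 550, 26, 40   # roughly the glove's sensor and object counts

# Each object gets a characteristic (synthetic) pressure signature;
# each grasp yields one noisy frame of that signature.
signatures = rng.random((N_OBJECTS, N_SENSORS))

def grasp(obj):
    return signatures[obj] + 0.3 * rng.standard_normal(N_SENSORS)

train = np.array([[grasp(o) for _ in range(FRAMES)] for o in range(N_OBJECTS)])
centroids = train.mean(axis=1)               # one mean pressure map per object

def classify(frame):
    return int(np.argmin(np.linalg.norm(centroids - frame, axis=1)))

held_out = [(o, grasp(o)) for o in range(N_OBJECTS) for _ in range(10)]
accuracy = np.mean([classify(f) == o for o, f in held_out])
print(f"held-out accuracy: {accuracy:.2f}")
```

Even this crude model separates the objects, which hints at why a few hundred pressure sensors carry enough signal for touch-only recognition.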

#human, #robotics

A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex

How the neocortex works is a mystery. In this paper we propose a novel framework for understanding its function. Grid cells are neurons in the entorhinal cortex that represent the location of an animal in its environment. Recent evidence suggests that grid cell-like neurons may also be present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column. They define a location-based framework for how the neocortex functions. Whereas grid cells in the entorhinal cortex represent the location of one thing, the body relative to its environment, we propose that cortical grid cells simultaneously represent the location of many things. Cortical columns in somatosensory cortex track the location of tactile features relative to the object being touched and cortical columns in visual cortex track the location of visual features relative to the object being viewed. We propose that mechanisms in the entorhinal cortex and hippocampus that evolved for learning the structure of environments are now used by the neocortex to learn the structure of objects. Having a representation of location in each cortical column suggests mechanisms for how the neocortex represents object compositionality and object behaviors. It leads to the hypothesis that every part of the neocortex learns complete models of objects and that there are many models of each object distributed throughout the neocortex. The similarity of circuitry observed in all cortical regions is strong evidence that even high-level cognitive tasks are learned and represented in a location-based framework. Read More
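One reason periodic grid codes are attractive as a general location framework can be shown with a toy sketch (a standard property of modular codes, not the paper's proposed mechanism): a few grid-like modules with coprime periods jointly distinguish far more locations than any single module can.

```python
PERIODS = (3, 4, 5)   # three grid "modules", each tiling space at a different period

def grid_code(x):
    """Encode an integer location as its phase within each periodic module."""
    return tuple(x % p for p in PERIODS)

# With coprime periods, the combined code is unique over their product (60 here),
# even though no single module can distinguish more locations than its own period.
codes = [grid_code(x) for x in range(60)]
print(len(set(codes)))  # 60
```

A small set of coarse periodic signals thus yields a large, unambiguous location space, which is part of why grid-cell-like codes are considered efficient for representing where things are.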

#human

Superconducting Optoelectronic Neurons V: Networks and Scaling

Networks of superconducting optoelectronic neurons are investigated for their near-term technological potential and long-term physical limitations. Networks with short average path length, high clustering coefficient, and power-law degree distribution are designed using a growth model that assigns connections between new and existing nodes based on spatial distance as well as degree of existing nodes. The network construction algorithm is scalable to arbitrary levels of network hierarchy and achieves systems with fractal spatial properties and efficient wiring. By modeling the physical size of superconducting optoelectronic neurons, we calculate the area of these networks. A system with 8,100 neurons and 330,430 total synapses will fit on a 1 cm × 1 cm die. Systems of millions of neurons with hundreds of millions of synapses will fit on a 300 mm wafer. For multi-wafer assemblies, communication at light speed enables a neuronal pool the size of a large data center (10^5 m^2) comprising 100 trillion neurons with coherent oscillations at 1 MHz. Assuming a power-law frequency distribution, as is necessary for self-organized criticality, we calculate the power consumption of the networks. We find the use of single photons for communication and superconducting circuits for computation leads to a power density low enough to be cooled by liquid helium-4 for networks of any scale. Read More
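The core of the growth model, attachment weighted by both the degree of existing nodes and spatial distance, can be sketched in a few lines. This toy version omits the paper's hierarchy and fractal construction, and the attachment rule and `beta` exponent are illustrative assumptions:

```python
import math
import random

random.seed(3)

def grow_network(n_nodes, m_edges, beta=1.0):
    """Toy spatial growth model: each new node, placed at a random 2-D
    position, connects to m existing nodes with probability proportional
    to degree / distance**beta (degree-and-distance attachment)."""
    pos = [(random.random(), random.random())]
    degree = [0]
    edges = []
    for new in range(1, n_nodes):
        p = (random.random(), random.random())
        weights = []
        for old in range(new):
            d = math.dist(p, pos[old]) + 1e-9          # avoid divide-by-zero
            weights.append((degree[old] + 1) / d ** beta)
        k = min(m_edges, new)
        targets = set()
        while len(targets) < k:
            targets.add(random.choices(range(new), weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        pos.append(p)
        degree.append(k)
    return pos, degree, edges

pos, degree, edges = grow_network(200, 3)
print(len(edges))  # 594 edges for n=200, m=3
```

Favouring nearby, already well-connected nodes is what produces the short paths, high clustering, and heavy-tailed degree distribution the abstract describes, while keeping total wiring length down.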

#human

How single neurons and brain networks support spatial navigation

Spatial navigation is an essential cognitive function, which is frequently impaired in patients suffering from neurological and psychiatric disorders. Research groups worldwide have studied the neuronal basis of spatial navigation, and the activity of both individual nerve cells and large cell assemblies in the brain appear to play a crucial role in the process. However, the relationship between the behaviour of individual cells and the behaviour of large cell networks has for the most part remained unexplored.

Various theories on this topic were put forward by an international team in the journal “Trends in Cognitive Sciences” on 24 May 2019. The review article was jointly authored by Dr. Lukas Kunz from the University Medical Center in Freiburg, Professor Liang Wang from the Chinese Academy of Sciences in Beijing, and Professor Nikolai Axmacher from Ruhr-Universität Bochum, together with colleagues from Columbia University in New York. Read More

#human

Adding audio data helps AI navigate 3D mazes

Sight is the sense on which humans chiefly rely to navigate the world, but sound might be just as important — it’s been shown that people can learn to follow subtle cues in the volume, direction, and speed of audio signals. Inspired by this, scientists at the University of Eastern Finland recently proposed in a preprint paper (“Do Autonomous Agents Benefit from Hearing?”) an AI system that complements visual data with sound. Preliminary results, they say, indicate that the approach improves agents’ ability to complete goals in a 3D maze.

“Learning using only visual information may not always be easy for the learning agent,” wrote the coauthors. “For example, it is difficult for the agent to reach the target using only visual information in scenarios where there are many rooms and there is no direct line of sight between the agent and the target … Thus, the use of audio features could provide valuable information for such problems.” Read More

#human

Do Autonomous Agents Benefit from Hearing?

Mapping states to actions in deep reinforcement learning is based mainly on visual information. The commonly used approach for dealing with visual information is to extract pixels from images and use them as the state representation for the reinforcement learning agent. But any vision-only agent is handicapped by being unable to sense audible cues. Using hearing, animals are able to sense targets that are outside their visual range. In this work, we propose the use of audio as information complementary to vision in the state representation. We assess the impact of such a multi-modal setup on reach-the-goal tasks in the ViZDoom environment. Results show that the agent improves its behaviour when visual information is accompanied by audio features. Read More
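The multi-modal idea itself is simple: build the agent's state by concatenating a visual feature vector with an audio feature vector. The sketch below illustrates that concatenation step only; the two feature extractors are crude stand-ins (block-average pooling and FFT band energies), not the encoders used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def visual_features(frame):
    """Stand-in for a visual encoder: 4x4 block-average pooling of a grayscale frame."""
    h, w = frame.shape
    return frame.reshape(4, h // 4, 4, w // 4).mean(axis=(1, 3)).ravel()

def audio_features(waveform, n_bands=8):
    """Stand-in for audio features: mean FFT magnitude in n_bands low-frequency bands."""
    mag = np.abs(np.fft.rfft(waveform))[: n_bands * 16]
    return mag.reshape(n_bands, -1).mean(axis=1)

def multimodal_state(frame, waveform):
    """The multi-modal setup: one state vector combining both modalities."""
    return np.concatenate([visual_features(frame), audio_features(waveform)])

frame = rng.random((64, 64))          # one visual observation from the environment
waveform = rng.standard_normal(1024)  # one window of audio samples
state = multimodal_state(frame, waveform)
print(state.shape)  # (24,)
```

The policy network then consumes this joint vector, so audible cues (such as a target sounding through a wall) can influence actions even when the target is out of the agent's line of sight.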

#human

AI develops human-like number sense – taking us a step closer to building machines with general intelligence

Numbers figure pretty high up on the list of what a computer can do well. While humans often struggle to split a restaurant bill, a modern computer can make millions of calculations in a mere second. Humans, however, have an innate and intuitive number sense that helped us, among other things, to build computers in the first place.

Unlike a computer, a human knows when looking at four cats, four apples and the symbol 4 that they all have one thing in common – the abstract concept of “four” – without even having to count them. This illustrates the difference between the human mind and the machine, and helps explain why we are not even close to developing AIs with the broad intelligence that humans possess. But now a new study, published in Science Advances, reports that an AI has spontaneously developed a human-like number sense. Read More

#human