Star Trek actor William Shatner is celebrating his 90th birthday by creating an AI-powered version of himself that’ll live forever.
We use the term AI pretty loosely, though. No, scientists aren’t going to scan Shatner’s brain and build an AI from his neural activity. (Nor is his head ending up in a green jar like it did in the sci-fi cartoon Futurama.)
Instead, a company called StoryFile is taping interviews with Shatner to create an interactive video program discussing his life. The AI component lets viewers ask the video program a question aloud. StoryFile’s system then searches the footage for the video segment that supplies the best response. Read More
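The core idea — match a spoken question to the recorded answer that fits best — is a retrieval problem. Here is a deliberately toy sketch of that idea (our illustration, not StoryFile’s actual system; the clip names and transcripts are made up), using simple string similarity in place of real speech recognition and semantic search:

```python
from difflib import SequenceMatcher

# Hypothetical index: each recorded clip mapped to its answer transcript.
clips = {
    "clip_01.mp4": "I grew up in Montreal and started acting in theatre.",
    "clip_02.mp4": "Playing Captain Kirk changed my life completely.",
}

def best_clip(question):
    # Return the clip whose transcript most resembles the question.
    score = lambda text: SequenceMatcher(None, question.lower(), text.lower()).ratio()
    return max(clips, key=lambda c: score(clips[c]))

print(best_clip("What was it like to play Captain Kirk?"))  # → clip_02.mp4
```

A production system would use speech-to-text plus learned semantic embeddings rather than character-level similarity, but the retrieve-the-best-segment structure is the same.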
AI armed with multiple senses could gain more flexible intelligence
Human intelligence emerges from our combination of senses and language abilities. Maybe the same is true for artificial intelligence.
In late 2012, AI scientists first figured out how to get neural networks to “see.” They proved that software designed to loosely mimic the human brain could dramatically improve existing computer-vision systems. The field has since learned how to get neural networks to imitate the way we reason, hear, speak, and write.
But while AI has grown remarkably human-like—even superhuman—at achieving a specific task, it still doesn’t capture the flexibility of the human brain. We can learn skills in one context and apply them to another. By contrast, though DeepMind’s game-playing algorithm AlphaGo can beat the world’s best Go masters, it can’t extend that strategy beyond the board. Deep-learning algorithms, in other words, are masters at picking up patterns, but they cannot understand and adapt to a changing world. Read More
Inside Facebook Reality Labs: The Next Era of Human-Computer Interaction
Facebook Reality Labs (FRL) Chief Scientist Michael Abrash has called AR interaction “one of the hardest and most interesting multi-disciplinary problems around,” because it’s a complete paradigm shift in how humans interact with computers. The last great shift began in the 1960s when Doug Engelbart’s team invented the mouse and helped pave the way for the graphical user interfaces (GUIs) that dominate our world today. The invention of the GUI fundamentally changed HCI for the better — and it’s a sea change that’s held for decades.
But all-day wearable AR glasses require a new paradigm because they will be able to function in every situation you encounter in the course of a day. They need to be able to do what you want them to do and tell you what you want to know when you want to know it, in much the same way that your own mind works — seamlessly sharing information and taking action when you want it, and not getting in your way otherwise. Read More
The Thousand Brains Theory of Intelligence
In our most recent peer-reviewed paper, A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex, we put forward a novel theory for how the neocortex works. The Thousand Brains Theory of Intelligence proposes that rather than learning one model of an object (or concept), the brain builds many models of each object. Each model is built using different inputs, whether from slightly different parts of the sensor (such as different fingers on your hand) or from different sensors altogether (eyes vs. skin). The models vote together to reach a consensus on what they are sensing, and the consensus vote is what we perceive. It’s as if your brain is actually thousands of brains working simultaneously.
A key insight of our theory comes from grid cells, neurons found in an older part of the brain responsible for navigation and knowing where you are in the world. Over the past few decades, scientists have made great progress in understanding that grid cells represent the location of a body in an environment. Recent experimental evidence suggests that grid cells are also present in the neocortex. We propose that grid cells exist throughout the neocortex, in every region and in every cortical column, and that they define a location-based framework for how the neocortex works. The same grid cell-based mechanism the older part of the brain uses to learn the structure of environments is used by the neocortex to learn the structure of objects — not only what they are, but also how they behave. Read More
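The voting step described above can be reduced to a very small sketch (our illustration, not Numenta’s implementation; the object labels are invented): many independent models each guess the object from their own partial input, and what is "perceived" is the consensus of their votes.

```python
from collections import Counter

def consensus(votes):
    # votes: one predicted object label per cortical-column model
    return Counter(votes).most_common(1)[0][0]

# Hypothetical guesses from five sensor-specific models (e.g. different fingers)
column_votes = ["mug", "mug", "bowl", "mug", "mug"]
print(consensus(column_votes))  # → mug
```

In the actual theory the "vote" is a continuous process over neural representations rather than a discrete majority count, but the structure — many models, one consensus — is the point.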
A New Artificial Intelligence Makes Mistakes—on Purpose
A chess program that learns from human error might be better at working with people or negotiating with them.
It took about 50 years for computers to eviscerate humans in the venerable game of chess. A standard smartphone can now play the kind of moves that make a grandmaster’s head spin. But one artificial intelligence program is taking a few steps backward, to appreciate how average humans play—blunders and all.
The AI chess program, known as Maia, uses the kind of cutting-edge AI behind the best superhuman chess-playing programs. But instead of learning how to destroy an opponent on the board, Maia focuses on predicting human moves, including the mistakes humans make. Read More
Machines Are Inventing New Math We’ve Never Seen
Pushing the boundaries of math requires great minds to pose fascinating problems. What if a machine could do it? Now scientists have created one that can.
… A group of researchers from the Technion in Israel and Google in Tel Aviv presented an automated conjecturing system that they call the Ramanujan Machine, named after the mathematician Srinivasa Ramanujan. … As the researchers explain in the paper, the entire discipline of mathematics can be broken down into two processes, crudely speaking: conjecturing things and proving things. Given more conjectures, there is more grist for the mill of the mathematical mind, more for mathematicians to prove and explain. …The researchers’ system is not, however, a universal mathematics machine. Rather, it conjectures formulas for how to compute the value of specific numbers called universal constants. Read More
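To give a feel for what "conjecturing formulas for universal constants" means, a candidate formula can be evaluated numerically and checked against a known constant. Below is a minimal sketch of such a numerical check (our toy illustration, not the Ramanujan Machine’s algorithm), using the classical continued-fraction expansion of e, whose terms follow the pattern 2; 1, 2, 1, 1, 4, 1, 1, 6, …:

```python
import math

def cf_value(terms):
    # Evaluate a simple continued fraction a0 + 1/(a1 + 1/(a2 + ...))
    val = float(terms[-1])
    for a in reversed(terms[:-1]):
        val = a + 1.0 / val
    return val

# Known continued-fraction expansion of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
terms = [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, 1, 1, 10]

# The "conjecture check": does the formula numerically match the constant?
print(abs(cf_value(terms) - math.e) < 1e-8)  # → True
```

The real system searches over families of such formulas and flags the ones whose values agree with known constants to high precision — the match is then a conjecture awaiting proof.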
Artificial intelligence in longevity medicine
Recent advances in deep learning enabled the development of AI systems that outperform humans in many tasks and have started to empower scientists and physicians with new tools. In this Comment, we discuss how recent applications of AI to aging research are leading to the emergence of the field of longevity medicine. Read More
New MIT brain research shows how AI could help us understand consciousness
A team of researchers from MIT and Massachusetts General Hospital recently published a study linking social awareness to individual neuronal activity. To the best of our knowledge, this is the first time evidence for the ‘theory of mind’ has been identified at this scale. Read More
Read the paper.
Can AI Machine Learning Enable Robot Empathy?
Columbia University AI researchers enable machines to be more human-like.
Artificial intelligence (AI) machine learning is fueling the current commercial boom in automation, and robots are becoming increasingly sophisticated. In a step toward endowing robots with human-like behavior, researchers at Columbia University showed how AI machine learning can predict a robot’s future actions from observation alone, and published their results earlier this month in Scientific Reports. Read More
Evolvable neural units that can mimic the brain’s synaptic plasticity
Neural network techniques are designed to mathematically emulate the function and structure of neurons and neural circuits in the brain. However, biological neurons are very complex, which makes artificially replicating them particularly challenging.
Researchers at Korea University have recently tried to reproduce the complexity of biological neurons more effectively by approximating the function of individual neurons and synapses. Their paper, published in Nature Machine Intelligence, introduces a network of evolvable neural units (ENUs) that can adapt to mimic specific neurons and mechanisms of synaptic plasticity. Read More
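For intuition about what "mimicking synaptic plasticity" means, here is a toy unit whose weights change online as it processes inputs (our illustration, not the paper’s ENU architecture; the class name and Hebbian rule are our own simplification of the kind of plasticity such units aim to capture):

```python
import math
import random

random.seed(0)

class PlasticUnit:
    """A single neuron-like unit with online Hebbian weight updates."""

    def __init__(self, n_inputs, lr=0.01):
        self.w = [random.gauss(0, 0.1) for _ in range(n_inputs)]
        self.lr = lr

    def step(self, x):
        # Activation: squashed weighted sum of inputs
        y = math.tanh(sum(wi * xi for wi, xi in zip(self.w, x)))
        # Hebbian update ("fire together, wire together"), then renormalize
        # so the weights stay bounded as the unit keeps adapting
        self.w = [wi + self.lr * y * xi for wi, xi in zip(self.w, x)]
        norm = math.sqrt(sum(wi * wi for wi in self.w))
        self.w = [wi / norm for wi in self.w]
        return y

unit = PlasticUnit(4)
for _ in range(500):
    unit.step([random.gauss(0, 1) for _ in range(4)])
```

Unlike a fixed weight matrix trained once by backpropagation, the weights here keep changing with every input — the defining feature the paper’s evolvable units are meant to emulate, though the ENUs themselves learn their plasticity rules rather than using a hand-written one.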