Tag Archives: Singularity
The evolution of cognitive architecture will deliver human-like AI
There’s no one right way to build a robot, just as there’s no single means of imparting it with intelligence. Last month, Engadget spoke with Nathan Michael, a Carnegie Mellon University associate research professor and director of the Resilient Intelligent Systems Lab, whose work involves stacking and combining a robot’s piecemeal capabilities as it learns them into an amalgamated artificial general intelligence (AGI). Think of a Roomba that learns how to vacuum, then how to mop, then how to dust and do dishes; pretty soon, you’ve got Rosie from The Jetsons.
But attempting to model an intelligence after either the ephemeral human mind or the exact physical structure of the brain (rather than iterating increasingly capable Roombas) is no small task, and one with no shortage of competing hypotheses and models. In fact, a 2010 survey of the field found more than two dozen such cognitive architectures under active study.
Artificial Intelligence: Mankind’s Last Invention
The Threat of Google’s DeepMind
If you consider that Google is the global leader in artificial intelligence, DeepMind is its crown jewel.
When Google folded DeepMind Health, its healthcare subsidiary, into the main company, breaking the pledge that ‘data will not be connected to Google accounts,’ it was clear the company was cutting corners.
Google’s AI Supremacy is an Existential Threat
Bigger than the Department of Justice pursuing Google for antitrust violations is the harm DeepMind could do to the future of artificial intelligence. DeepMind is arguably the leader in deep learning, and the choices it makes will decide many things about the fate of humanity in an AI-centric world.
The next real interface after smartphones is the neural interface, and a Google-powered neural interface (beyond earbuds and voice AI) will power the next era of augmented humans.
Risks From General Artificial Intelligence Without an Intelligence Explosion
The artificial intelligence systems we have today can be referred to as narrow AI: they perform well at specific tasks, like playing chess or Jeopardy, and at some classes of problems, like Atari games. Many experts predict that general AI, which would be able to perform most tasks humans can, will be developed later this century, with median estimates around 2050. When people talk about long-term existential risk from the development of general AI, they commonly refer to the intelligence explosion (IE) scenario. AI risk skeptics often argue against AI safety concerns along the lines of “Intelligence explosion sounds like science fiction and seems really unlikely, therefore there’s not much to worry about.” It’s unfortunate when AI safety concerns are rounded down to worries about IE. Unlike I. J. Good, I do not consider this scenario inevitable (though I consider it relatively likely), and I would expect general AI to present an existential risk even if I knew for sure that an intelligence explosion were impossible.
Reimagining The Future: A Journey Through The Looking Glass
Can we stop AI outsmarting humanity?
It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life.
It began four million years ago, when brain volumes began climbing rapidly in the hominid line.
Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
Fifty years ago with the invention of the computer.
In less than thirty years, it will end.