Reframing Superintelligence — Comprehensive AI Services as General Intelligence

Studies of superintelligent-level systems have typically posited AI functionality that plays the role of a mind in a rational utility-directed agent, and hence employ an abstraction initially developed as an idealized model of human decision makers. Today, developments in AI technology highlight intelligent systems that are quite unlike minds, and provide a basis for a different approach to understanding them: we can consider how AI systems are produced (through the work of research and development), what they do (broadly, provide services by performing tasks), and what they will enable (including incremental yet potentially thorough automation of human tasks).

Because tasks subject to automation include the tasks that comprise AI research and development, current trends in the field promise accelerating AI-enabled advances in AI technology itself, potentially leading to asymptotically recursive improvement of AI technologies in distributed systems, a prospect that contrasts sharply with the vision of self-improvement internal to opaque, unitary agents. Read More

#human, #singularity

Measuring Progress Toward AGI Is Hard

Artificial General Intelligence (AGI) is still a ways off, but surprisingly there has been very little conversation about how to measure whether we are getting close. This article reviews a proposal to benchmark existing AIs against animal capabilities in an Animal-AI Olympics. It's a real thing, and it is accepting entrants now. Read More

#singularity

The neuroscience of imagination – Andrey Vyshedskiy

Read More

#human, #singularity

The evolution of cognitive architecture will deliver human-like AI

There’s no one right way to build a robot, just as there’s no singular means of imparting it with intelligence. Last month, Engadget spoke with Carnegie Mellon University associate research professor Nathan Michael, director of the Resilient Intelligent Systems Lab, whose work involves stacking and combining a robot’s various piecemeal capabilities as it learns them into an amalgamated artificial general intelligence (AGI). Think: a Roomba that learns how to vacuum, then learns how to mop, then learns how to dust and do dishes — pretty soon, you’ve got Rosie from The Jetsons.

But attempting to model an intelligence after either the ephemeral human mind or the exact physical structure of the brain (rather than iterating increasingly capable Roombas) is no small task — and with no small amount of competing hypotheses and models to boot. In fact, a 2010 survey of the field found more than two dozen such cognitive architectures actively being studied. Read More

#human, #singularity

Artificial Intelligence: Mankind’s Last Invention

Read More

#singularity, #videos

The Threat of Google’s DeepMind

If you consider that Google is the global leader in artificial intelligence, DeepMind is its crown jewel.

When Google moved DeepMind Health, the healthcare subsidiary, into the main company — breaking a pledge that ‘data will not be connected to Google accounts’ — you knew Google was cutting corners.

Google’s AI Supremacy is an Existential Threat

Bigger than the Department of Justice going after Google for antitrust is the harm DeepMind could do to the future of artificial intelligence. DeepMind is arguably the leader in deep learning, and the choices it makes will decide many things about the fate of humanity in an AI-centric world.

The next real interface after smartphones is the neural interface, and a Google-powered neural interface (beyond earbuds and voice AI) will power the next era of augmented humans. Read More

#artificial-intelligence, #singularity

Risks From General Artificial Intelligence Without an Intelligence Explosion

Artificial intelligence systems we have today can be referred to as narrow AI — they perform well at specific tasks, like playing chess or Jeopardy, and at some classes of problems, like Atari games. Many experts predict that general AI, which would be able to perform most tasks humans can, will be developed later this century, with median estimates around 2050. When people talk about long-term existential risk from the development of general AI, they commonly refer to the intelligence explosion (IE) scenario. AI risk skeptics often argue against AI safety concerns along the lines of “Intelligence explosion sounds like science fiction and seems really unlikely, therefore there’s not much to worry about.” It’s unfortunate when AI safety concerns are rounded down to worries about IE. Unlike I. J. Good, I do not consider this scenario inevitable (though I consider it relatively likely), and I would expect general AI to present an existential risk even if I knew for sure that an intelligence explosion were impossible. Read More

#singularity

Reimagining The Future: A Journey Through The Looking Glass

Read More

#singularity, #videos

Can we stop AI outsmarting humanity?

It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life.

It began four million years ago, when brain volumes began climbing rapidly in the hominid line.

Fifty thousand years ago with the rise of Homo sapiens sapiens.

Ten thousand years ago with the invention of civilization.

Five hundred years ago with the invention of the printing press.

Fifty years ago with the invention of the computer.

In less than thirty years, it will end. Read More

#artificial-intelligence, #singularity