The singularity is very close

Within one century, biological intelligence will be a tiny minority of all sentient life. It will be very rare to be human. It will be very rare to have cells and blood and a heart. Human beings will be outnumbered a thousand to one by conscious machine intelligences.

Artificial General Intelligence (AGI) is about to go from being science fiction to being part of everybody’s day-to-day life. It’s also going to happen in the blink of an eye — because once it gets loose, there is no stopping it from scaling itself incredibly rapidly. Whether we want it to or not, it will impact every human being’s life.

Some people believe the singularity won’t happen for a very long time, or at all. I’d like to discuss why I am nearly certain it will happen in the next 20 years. My overall prediction is based on 3 hypotheses:

  1. Scale is not the solution.
  2. AI will design AGI.
  3. The ball is already rolling.

Read More

#singularity

Discontinuities And General Artificial Intelligence

…Today I want to talk about predictions of when we reach a more general version of artificial intelligence, similar to a human brain, and ask what we’ve learned.  There have been a few approaches to this over the years.  One that I was a big fan of was the 2015 WaitButWhy piece on the AI revolution.  Its argument is that AI progress is doubling while we expect a linear trend, so that doubling will explode the capabilities of machines sooner than we thought.  I admit that I was persuaded by this argument, but it increasingly looks incorrect.  While it is possible that it still holds, and that we are just in the early stages of the trend, it increasingly looks like the marginal gains from existing approaches to AI are declining and won’t get us to general AI.

The other big prediction about when we get there is Ray Kurzweil’s extrapolation of computing power, noting that next year, in 2023, the amount of compute you can buy for $1,000 will surpass the compute available in the human brain, bringing us close to general AI.  Of course, that only works if the key to AI is raw compute power.  It increasingly looks like that may be wrong. Read More
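The doubling-versus-linear intuition behind both predictions can be made concrete with a toy calculation. The numbers below are my own assumptions in the Kurzweil style (a brain of roughly 1e16 ops/s, $1,000 buying roughly 1e15 ops/s today, compute per dollar doubling every two years), not figures taken from this post:

```python
# Sketch (with assumed numbers): why exponential growth outruns linear intuition.
BRAIN_OPS = 1e16          # assumed brain-equivalent ops/sec
start_ops = 1e15          # assumed ops/sec per $1,000 today
doubling_years = 2.0      # assumed doubling period

linear_rate = start_ops   # "linear" expectation: add today's amount each period

ops_exp, ops_lin, year = start_ops, start_ops, 0.0
while ops_exp < BRAIN_OPS:
    year += doubling_years
    ops_exp *= 2              # exponential: doubles each period
    ops_lin += linear_rate    # linear: same absolute gain each period

print(f"exponential reaches brain-scale in ~{year:.0f} years")
print(f"linear trend by then: {ops_lin:.1e} ops/s ({BRAIN_OPS / ops_lin:.0f}x short)")
```

Under these assumptions the exponential curve crosses brain scale after four doublings (about eight years), while the linear projection is still a factor of two short; that gap is exactly what the WaitButWhy argument turns on, and what the declining-marginal-gains objection disputes.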

#human, #singularity

Humans Will Not Be Able To Control Superintelligent Artificial Intelligence, Study Shows

A new study has warned that it will become impossible to predict the actions of superintelligent artificial intelligence (AI), raising questions over whether humans may eventually lose control.

Research conducted at the Center for Humans and Machines at the Max Planck Institute for Human Development and published in the Journal of Artificial Intelligence Research has found that in order to accurately predict what an individual AI is going to do, scientists would have to run an exact simulation of the system – a feat that will grow more difficult as AI systems become more and more advanced. Read More

#singularity

Superintelligence cannot be contained: Lessons from Computability Theory

Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potential catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that such containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible. Read More
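The computability argument in this abstract is a relative of the halting problem, and its core move can be sketched in a few lines. The code below is my own illustration of that diagonalization, not anything from the paper: assume a perfect safety predictor existed, and a program can be built to contradict whatever the predictor says about it.

```python
# Illustration (mine, not the paper's code) of the halting-style barrier:
# no checker can correctly classify a program built to invert its own verdict.

def make_contrarian(predicted_safe: bool):
    """Given a hypothetical checker's prediction about this program,
    return a program that behaves so the prediction is wrong."""
    def program() -> str:
        if predicted_safe:
            return "HARM"   # predicted safe -> misbehave
        return "SAFE"       # predicted harmful -> behave
    return program

# Whatever the checker predicts, the contrarian program contradicts it:
for prediction in (True, False):
    actual = make_contrarian(prediction)()
    predicted = "SAFE" if prediction else "HARM"
    assert actual != predicted   # the checker is wrong either way
print("no consistent prediction exists for the contrarian program")
```

This is the same self-reference trick Turing used against a halting decider; the paper's stronger claim is that a superintelligence rich enough to simulate arbitrary programs drags any containment procedure into exactly this trap.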

#human, #singularity

We Have Already Let The Genie Out of The Bottle

How will we make sure that Artificial Intelligence won’t run amok and will be a force for good?

There are many areas where governance frameworks and international agreements about the use of artificial intelligence (AI) are needed. For example, there is an urgent need for internationally shared rules governing autonomous weapons and the use of facial recognition to target minorities and suppress dissent. Eliminating bias in algorithms for criminal sentencing, credit allocation, social media curation and many other areas should be an essential focus for both research and the spread of best practices. Read More

#artificial-intelligence, #singularity, #bias

Could Super Artificial Intelligence Be, In Some Sense, Alive?

Should you feel bad about pulling the plug on a robot or switch off an artificial intelligence algorithm? Not for the moment. But how about when our computers become as smart—or smarter—than us?

Philosopher Borna Jalšenjak of the Luxembourg School of Business has been thinking about that. He has a chapter, “The Artificial Intelligence Singularity: What It Is and What It Is Not,” in Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives, in which he explores the case for “thinking machines” being alive, even if they are machines. Read More

#singularity

Is The Goal-Driven Systems Pattern The Key To Artificial General Intelligence (AGI)?

Since the beginnings of artificial intelligence, researchers have sought to test the intelligence of machine systems by having them play games against humans. One of the hallmarks of human intelligence is often thought to be the ability to think creatively, consider various possibilities, and keep a long-term goal in mind while making short-term decisions. If computers can play difficult games as well as humans, then surely they can handle even more complicated tasks. From early checkers-playing bots developed in the 1950s to today’s deep learning-powered bots that can beat even the best players in the world at games like chess, Go and DOTA, the idea of machines that can find solutions to puzzles is as old as AI itself, if not older. Read More
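The “long-term goal, short-term decisions” pattern these game-playing systems share can be sketched with minimax search. This toy example is mine, not from the article: two players alternately take one or two items from a pile, and whoever takes the last item wins.

```python
# Minimal minimax sketch of the goal-driven pattern: look ahead to the end of
# the game (the long-term goal) before committing to the next move.

def best_move(pile: int, maximizing: bool = True):
    """Return (value, move): value is +1 if the player to move can force a win,
    -1 otherwise; move is the number of items (1 or 2) to take now."""
    if pile == 0:
        # The previous player took the last item, so the player to move lost.
        return (-1 if maximizing else 1), None
    best = None
    for take in (1, 2):
        if take <= pile:
            value, _ = best_move(pile - take, not maximizing)
            if best is None or (maximizing and value > best[0]) \
                            or (not maximizing and value < best[0]):
                best = (value, take)
    return best

value, move = best_move(7)   # pile of 7: the mover wins by taking 1, leaving 6
```

In this game the losing positions are exactly the multiples of three, and the search rediscovers that from the rules alone. Real game engines layer pruning, learned evaluation functions, and self-play on top of this skeleton, but the goal-driven core is the same.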

#artificial-intelligence, #human, #singularity

World’s First ‘Living Machine’ Created Using Frog Cells and Artificial Intelligence

Scientists used computer algorithms to “evolve” an organism that’s made of 100% frog DNA — but it isn’t a frog.

What happens when you take cells from frog embryos and grow them into new organisms that were “evolved” by algorithms? You get something that researchers are calling the world’s first “living machine.”

Though the original stem cells came from frogs — the African clawed frog, Xenopus laevis — these so-called xenobots don’t resemble any known amphibians. Read More

#artificial-intelligence, #singularity

If AI Suddenly Gains Consciousness, Some Say It Will Happen First In AI Self-Driving Cars

There has been a lot of speculation that one of these days there will be an AI system that suddenly and unexpectedly gives rise to consciousness.

This moment is often referred to as the singularity, and there is much hand-wringing that once it occurs we are perhaps dooming ourselves either to utter death and destruction or to becoming slaves of AI. Read More

#singularity