How Mirroring the Architecture of the Human Brain Is Speeding Up AI Learning

While AI can carry out some impressive feats when trained on millions of data points, the human brain can often learn from a tiny number of examples. New research shows that borrowing architectural principles from the brain can help AI get closer to our visual prowess.

The prevailing wisdom in deep learning research is that the more data you throw at an algorithm, the better it will learn. 

… This prompted a pair of neuroscientists to see if they could design an AI that could learn from a few data points by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they explained that the approach significantly boosts AI’s ability to learn new visual concepts from just a few examples. Read More

#human

Superintelligence Cannot Be Contained: Lessons from Computability Theory

Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potential catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that such containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible. Read More
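The impossibility argument leans on the undecidability of the halting problem: any strict containment procedure would have to decide behavioral properties of arbitrary programs, which no algorithm can do. Here is a minimal Python sketch of the underlying diagonalization, purely as an illustration of the classic proof; the function names (`make_paradox`, `decider_is_wrong`) are ours, not the paper's:

```python
# Sketch of the halting-problem diagonalization: any alleged total decider
# halts(f) -> bool can be defeated by a program built to do the opposite
# of whatever the decider predicts about it.

def make_paradox(claimed_decider):
    """Given an alleged decider, build a function that contradicts it."""
    def paradox():
        if claimed_decider(paradox):   # decider says: paradox halts...
            while True:                # ...so loop forever instead
                pass
        return None                    # decider says: loops -> halt at once
    return paradox

def decider_is_wrong(answer):
    """Show that a decider returning `answer` for our paradox is wrong."""
    paradox = make_paradox(lambda f: answer)
    if answer:
        # Decider claims "halts", but paradox would loop forever; we cannot
        # run it to completion, so we just record the contradiction.
        return True
    else:
        paradox()     # decider claims "loops", yet this call returns
        return True

# Every possible answer the decider could give is contradicted.
assert decider_is_wrong(True) and decider_is_wrong(False)
```

The paper's contribution is to extend this kind of argument from "does this program halt?" to "will this program harm humans?", which is why the authors conclude that strict containment of a superintelligence is infeasible in principle.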

#human, #singularity

Model describes complete grasping movement planning in the brain

Neuroscientists at the German Primate Center (DPZ) – Leibniz Institute for Primate Research in Göttingen have developed a model that seamlessly represents the entire planning of a movement, from seeing an object to grasping it. Comprehensive neural and motor data from grasping experiments with two rhesus monkeys provided the decisive input for the model: an artificial neural network that, after training with images of specific objects, is able to simulate processes and interactions in the brain.

The simulated unit activity from the artificial network was able to explain the complex biological data from the animal experiments, demonstrating the validity of the functional model. In the long term, this could support the development of better neuroprostheses, for example to bridge the damaged nerve connection between brain and extremities in paraplegia and thus restore the transmission of movement commands from the brain to the arms and legs. Read More
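To give a feel for the general shape of such a model, here is a toy forward pass from visual features to a motor command. This is a rough, hypothetical illustration only; none of the dimensions, weights, or layer names come from the DPZ study:

```python
# Toy sketch (NOT the DPZ model): a small feedforward network mapping
# visual object features to grasp-motor outputs.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 64 visual features in, 16 motor outputs.
W1, b1 = rng.normal(size=(64, 32)) * 0.1, np.zeros(32)
W2, b2 = rng.normal(size=(32, 16)) * 0.1, np.zeros(16)

def grasp_plan(visual_features):
    """Forward pass: visual representation -> hidden 'planning' layer -> motor command."""
    hidden = np.tanh(visual_features @ W1 + b1)   # stands in for intermediate planning areas
    return np.tanh(hidden @ W2 + b2)              # stands in for motor output

motor_command = grasp_plan(rng.normal(size=64))
print(motor_command.shape)  # (16,)
```

In the actual study, the interest lies in comparing the network's intermediate activity against recorded neural data, which is what let the researchers validate the model against the monkey experiments.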

#human

Your Brain Doesn’t Work the Way You Think It Does

A conversation with neuroscientist Lisa Feldman Barrett on the counterintuitive ways your mind processes reality—and why understanding that might help you feel a little less anxious.

At the very beginning of her new book Seven and a Half Lessons About the Brain, psychology professor Lisa Feldman Barrett writes that each chapter will present “a few compelling scientific nuggets about your brain and considers what they might reveal about human nature.” Though it’s an accurate description of what follows, it dramatically undersells the degree to which each lesson will enlighten and unsettle you. It’s like lifting up the hood of a car to see an engine, except that the car is you and you find an engine that doesn’t work at all like you thought it did.

For instance, consider the fourth lesson, Your Brain Predicts (Almost) Everything You Do. “Neuroscientists like to say that your day-to-day experience is a carefully controlled hallucination, constrained by the world and your body but ultimately constructed by your brain,” writes Dr. Barrett, who is a University Distinguished Professor at Northeastern and who has research appointments at Harvard Medical School and Massachusetts General Hospital. “It’s an everyday kind of hallucination that creates all of your experiences and guides all your actions. It’s the normal way that your brain gives meaning to the sensory inputs from your body and from the world (called ‘sense data’), and you’re almost always unaware that it’s happening.” Read More

#human

Artificial General Intelligence: A technology with more Cons than Pros

It is not every day that humans are exposed to questions like: what will happen if technology exceeds the human thought process? Or what will happen if machines become conscious, or develop a conscience, and start making decisions equivalent to those of humans? Sounds catastrophic, right? However, scientists and researchers are looking for an alternative solution that can perform tasks which traditional artificial intelligence and its subsidiaries cannot.

… Research labs like OpenAI are already working to make AI more diverse so that the technology can contribute to human advancement. However, amid the curiosity about AGI, researchers need to pause and ask: what is in store for humans if this technology actually comes into existence? Read More

#human

The Future of AI is Artificial Sentience

How do you *feel* about that?

Much of today’s discussion around the future of artificial intelligence is focused on the possibility of achieving artificial general intelligence: essentially, an AI capable of tackling an array of random tasks and working out how to tackle a new task on its own, much like a human. But at this stage in the game, the discussion around this kind of intelligence seems less about if and more about when. With the advent of neural networks and deep learning, the sky is the limit, at least once other areas of technology overcome their remaining obstacles. Read More

#human

Is Artificial Intelligence Closer to Common Sense?

Artificial intelligence researchers have not been successful in giving intelligent agents the common-sense knowledge they need to reason about the world. Without this knowledge, it is impossible for intelligent agents to truly interact with the world. Traditionally, there have been two unsuccessful approaches to getting computers to reason about the world—symbolic logic and deep learning. A new project, called COMET, tries to bring these two approaches together. Although it has not yet succeeded, it offers the possibility of progress. Read More

#human

Detailed Analysis On AI Warnings: Is It Really The “Biggest Existential Threat”?

The rapid evolution of artificial intelligence has stirred up plenty of discussion: where some of its proponents find it immensely powerful for solving major social and health issues, other tech leaders and renowned scientists are issuing dire warnings. The critics have gone so far as to declare it “more dangerous than nukes”, the “biggest existential threat”, “a fundamental risk to the existence of human civilization”, and something that “can end mankind”. Given the gravity of these cautions, it’s time to put the bitmojis aside and focus on the depth of the matter. Read More

#artificial-intelligence, #human

Is AI an Existential Threat?

When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. The answer requires understanding the technology behind Machine Learning (ML) and recognizing that humans have a tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI.

… Focused on whatever operation they are responsible for, ANI systems are unable to use generalized learning to take over the world. That is the good news; the bad news is that their reliance on a human operator leaves them susceptible to biased data, human error, or, even worse, a rogue human operator. Read More

#artificial-intelligence, #human

GPT3 and AGI: Beyond the Dichotomy

Earlier this week, I spoke at an interesting online event organized by the Khaleej Times, the UAE’s longest-running daily English newspaper.

This two-part blog is based on the talk. I addressed a hard topic – and one which I hope sparks some discussion.

To summarize my talk:

  • The discussion of whether GPT3 is AGI or not is dominated by either hype or dichotomy.
  • We need to think past both these polarizing mindsets, because hype misleads the discussion and dichotomy stifles it.
  • If we do so, then what are the implications for AGI?
Read More: Part 1 | Part 2

#human