Discontinuities And General Artificial Intelligence

…Today I want to talk about predictions of when we will reach a more general version of artificial intelligence, similar to a human brain, and ask what we’ve learned.  There have been a few approaches to this over the years.  One that I was a big fan of was the 2015 WaitButWhy piece on the AI revolution.  The argument in that piece is that AI progress is doubling while we expect only a linear trend, so the capabilities of machines will explode sooner than we think.  I admit that I was a big fan of this argument, but it increasingly looks incorrect.  While it is possible that it still holds, and that we are just in the early stages of the trend, it increasingly looks like the marginal gains from existing approaches to AI are declining and won’t get us to general AI.

The other big prediction about when we get there is Ray Kurzweil’s extrapolation of computing power, noting that next year, in 2023, the amount of compute you can buy for $1000 will surpass the compute available in the human brain, bringing us close to general AI.  Of course, that only works if the key to AI is raw compute power.  It increasingly looks like that may be wrong. Read More

#human, #singularity

CHI’22 Preprint Collection HCI + AI

Looking for current research on HCI + AI? Here’s a list.

Here’s a collection of CHI’22 preprints on topics related to computational HCI, data, algorithms, AI and related methodology, inclusively interpreted. It’s not a comprehensive list and in no particular order. Found via Twitter and arXiv. Read More

#human

A replay of life: What happens in our brain when we die?

Neuroscientists have recorded the activity of a dying human brain and discovered rhythmic brain wave patterns around the time of death that are similar to those occurring during dreaming, memory recall, and meditation. Now, a study published in Frontiers brings new insight into a possible organizational role of the brain during death and suggests an explanation for vivid life recall in near-death experiences.

Imagine reliving your entire life in the space of seconds. Like a flash of lightning, you are outside of your body, watching memorable moments you lived through. This process, known as ‘life recall’, can be similar to what it’s like to have a near-death experience. What happens inside your brain during these experiences and after death are questions that have puzzled neuroscientists for centuries. However, a new study published in Frontiers in Aging Neuroscience suggests that your brain may remain active and coordinated during and even after the transition to death, and be programmed to orchestrate the whole ordeal.

When an 87-year-old patient developed epilepsy, Dr Raul Vicente of the University of Tartu, Estonia and colleagues used continuous electroencephalography (EEG) to detect the seizures and treat the patient. During these recordings, the patient had a heart attack and passed away. This unexpected event allowed the scientists to record the activity of a dying human brain for the first time ever. Read More

#human

Competitive programming with AlphaCode

Creating solutions to unforeseen problems is second nature in human intelligence – a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or else retrieving and copying existing solutions. As part of DeepMind’s mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.

In our preprint, we detail AlphaCode, which uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.
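The generate-then-filter idea can be pictured with a toy sketch (this is not AlphaCode’s actual code, and the candidate “programs” below are invented for illustration): many sampled candidates are kept only if they pass the problem’s example test cases.

```python
def filter_candidates(candidates, examples):
    """Keep only candidate programs whose output matches every example I/O pair."""
    return [
        program
        for program in candidates
        if all(program(inp) == out for inp, out in examples)
    ]

# Toy stand-ins for programs sampled from a model, for the problem
# "given x, return 2x". In a real system these would be generated code.
candidates = [
    lambda x: x + x,   # correct
    lambda x: x * x,   # wrong
    lambda x: 2 * x,   # correct
    lambda x: x,       # wrong
]
examples = [(1, 2), (3, 6)]   # the problem's example test cases

survivors = filter_candidates(candidates, examples)
```

Filtering against example tests prunes most incorrect samples cheaply; here only the two candidates that actually double their input survive.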

We validated our performance using competitions hosted on Codeforces, a popular platform which hosts regular competitions that attract tens of thousands of participants from around the world who come to test their coding skills. We selected for evaluation 10 recent contests, each newer than our training data. AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.

To help others build on our results, we’re releasing our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure the programs that pass these tests are correct — a critical feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation. Read More

#human, #nlp, #devops

Meta’s ‘data2vec’ is a step toward One Neural Network to Rule Them All

The race is on to create one neural network that can process multiple kinds of data — a more-general artificial intelligence that doesn’t discriminate about types of data but instead can crunch them all within the same basic structure.

Multi-modal networks, as these systems are called, are seeing a flurry of activity in which different kinds of data, such as images, text, and speech audio, are passed through the same algorithm to produce scores on different tests such as image recognition, natural language understanding, or speech detection.

And these ambidextrous networks are racking up scores on benchmark tests of AI. The latest achievement is what’s called “data2vec,” developed by researchers at the AI division of Meta (parent of Facebook, Instagram, and WhatsApp).

The point, as Meta researchers Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli explain in a blog post, is to approach something more like the general learning ability that the human mind seems to encompass. Read More

#human, #multi-modal

Meta’s new learning algorithm can teach AI to multi-task

If you can recognize a dog by sight, then you can probably recognize a dog when it is described to you in words. Not so for today’s artificial intelligence. Deep neural networks have become very good at identifying objects in photos and conversing in natural language, but not at the same time: there are AI models that excel at one or the other, but not both.

Part of the problem is that these models learn different skills using different techniques. This is a major obstacle for the development of more general-purpose AI, machines that can multi-task and adapt. It also means that advances in deep learning for one skill often do not transfer to others.

A team at Meta AI (previously Facebook AI Research) wants to change that. The researchers have developed a single algorithm that can be used to train a neural network to recognize images, text, or speech. The algorithm, called Data2vec, not only unifies the learning process but performs at least as well as existing techniques in all three skills. “We hope it will change the way people think about doing this type of work,” says Michael Auli, a researcher at Meta AI. Read More
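One way to picture a unified objective of this kind is self-distillation on masked inputs: a student encoder sees a masked view of the data and learns to predict the representations that a slowly updated teacher copy produces from the full view. The sketch below is a minimal toy under that assumption, not Meta’s implementation; the linear “encoder”, sizes, and hyperparameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy shared "encoder": one linear map, usable for any input modality.
dim_in, dim_out = 8, 4
student_W = rng.normal(size=(dim_in, dim_out))
teacher_W = student_W.copy()   # teacher starts as a copy of the student

def encode(W, x):
    return x @ W

def objective(student_W, teacher_W, x_masked, x):
    # Student predicts, from the masked view, the representation the
    # teacher produces from the full view (mean squared error).
    return float(np.mean((encode(student_W, x_masked) - encode(teacher_W, x)) ** 2))

x = rng.normal(size=(16, dim_in))          # a batch of inputs
mask = rng.random((16, dim_in)) < 0.5
x_masked = np.where(mask, 0.0, x)          # the student only sees this view

lr, tau = 0.01, 0.99                       # step size; teacher EMA decay
initial_loss = objective(student_W, teacher_W, x_masked, x)

for _ in range(200):
    target = encode(teacher_W, x)                              # teacher: full input
    pred = encode(student_W, x_masked)                         # student: masked input
    student_W -= lr * (x_masked.T @ (pred - target)) / len(x)  # MSE gradient step
    teacher_W = tau * teacher_W + (1 - tau) * student_W        # teacher tracks student

final_loss = objective(student_W, teacher_W, x_masked, x)
```

Because the target is a learned representation rather than raw pixels, tokens, or audio samples, the same loop applies regardless of the input modality, which is the unifying idea the article describes.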

#big7, #human, #multi-modal

Advanced AIs Exhibiting Depression and Addiction, Scientists Say

It turns out that artificial intelligence chatbots may be more like us than you’d think.

A new preprint study out of the Chinese Academy of Science (CAS) claims that many big-name chatbots, when asked the types of questions generally used as cursory intake queries for depression and alcoholism, appeared to be both “depressed” and “addicted.” Read More

#human, #robotics

Watch our interview with Ameca, a humanoid #robot at #CES2022 #Shorts

Read More

#human, #robotics, #videos

Brain cell differences could be key to learning in humans and AI

Imperial researchers have found that variability between brain cells might speed up learning and improve the performance of the brain and future artificial intelligence (AI).

The new study found that by tweaking the electrical properties of individual cells in simulations of brain networks, the networks learned faster than simulations with identical cells. Read More
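One intuition for why per-cell variability might help (a toy illustration, not the study’s simulation code): units with different time constants retain input over different timescales, so a network containing a mix of them has access to both fast and slow memory. The `leaky_trace` helper below is hypothetical.

```python
import numpy as np

def leaky_trace(signal, tau, dt=1.0):
    """Leaky integrator: h <- h + (dt / tau) * (x - h).

    A small tau makes the unit respond and forget quickly;
    a large tau makes it integrate slowly and remember longer.
    """
    h, out = 0.0, []
    for x in signal:
        h += (dt / tau) * (x - h)
        out.append(h)
    return np.array(out)

signal = np.zeros(50)
signal[0] = 1.0                      # a single input impulse

fast = leaky_trace(signal, tau=2.0)  # forgets the impulse quickly
slow = leaky_trace(signal, tau=20.0) # still carries a trace at the end
```

A population with heterogeneous values of `tau` spans many such timescales at once, which is one plausible reading of why the variable-cell simulations in the study learned faster than identical-cell ones.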

#human

Artificial intelligence sheds light on how the brain processes language

Neuroscientists find the internal workings of next-word prediction models resemble those of language-processing centers in the brain.

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.
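To make “predicting the next word” concrete, here is a deliberately tiny bigram model built from word-pair counts, far simpler than the neural models discussed, but optimizing the same objective. The corpus and helper names are invented for the example.

```python
from collections import Counter, defaultdict

corpus = "the brain predicts the next word and the brain learns".split()

# Count how often each word follows each other word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` seen in the corpus."""
    return following[word].most_common(1)[0][0]
```

A modern language model replaces these raw counts with a neural network conditioned on the whole preceding context, but the training signal, guessing the next word, is the same.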

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion. 

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain. Read More

#nlp, #human