OpenAI is forming a new team to bring ‘superintelligent’ AI under control

OpenAI is forming a new team led by Ilya Sutskever, its chief scientist and one of the company’s co-founders, to develop ways to steer and control “superintelligent” AI systems.

In a blog post published today, Sutskever and Jan Leike, a lead on the alignment team at OpenAI, predict that AI with intelligence exceeding that of humans could arrive within the decade. This AI — assuming it does, indeed, arrive eventually — won’t necessarily be benevolent, necessitating research into ways to control and restrict it, Sutskever and Leike say. — Read More

#singularity

How existential risk became the biggest meme in AI — “Ghost stories are contagious.”

Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

… Concerns about runaway, self-improving machines have been around since Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical date at which artificial intelligence outstrips human intelligence and machines take over. 

But at the heart of such concerns is the question of control: How do humans stay on top if (or when) machines get smarter? — Read More

#singularity

Shall we play a game?

GREETINGS PROFESSOR FALKEN

SHALL WE PLAY A GAME?

Maybe, but AI is not an arms race (Read More). We need to ask ourselves, Is Avoiding Extinction from AI Really an Urgent Priority? The history of technology suggests that the greatest risks come not from the tech, but from the people who control it (Read More). Somehow, with AI, we leapt immediately to DEFCON 1 (Read More).

So, existential threat or marketing hype?

HOW ABOUT A NICE GAME OF CHESS?

#singularity

350+ industry leaders sign Statement on AI Risk

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. — View the List of Signatories

#singularity

AI and the future of humanity 

Read More

#singularity, #videos

“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company

Read More

#singularity, #videos

Are Emergent Abilities of Large Language Models a Mirage?

Recent work claims that large language models display emergent abilities: abilities that are present in larger-scale models but absent from smaller-scale ones. What makes emergent abilities intriguing is twofold: their sharpness, transitioning seemingly instantaneously from absent to present, and their unpredictability, appearing at seemingly unforeseeable model scales.

Here, we present an alternative explanation for emergent abilities: for a particular task and model family, when analyzing fixed model outputs, one can choose a metric that leads to the inference of an emergent ability, or another metric that does not. Thus, our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale.

We present our explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test, and confirm three predictions about the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities; (2) make, test, and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how similar metric decisions suggest apparent emergent abilities on vision tasks in diverse deep network architectures (convolutional, autoencoder, transformer). In all three analyses, we find strong supporting evidence that emergent abilities may not be a fundamental property of scaling AI models. Read More
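The paper’s core claim (that metric choice, not scale, creates the appearance of emergence) is easy to see in a toy simulation. The sketch below is not the authors’ code; it assumes a hypothetical model family whose per-token accuracy improves smoothly with scale, and a task whose answer is K tokens long, scored two different ways.

```python
import math

# Illustrative sketch (not the paper's code): how metric choice alone can
# make smooth scaling look "emergent".
# Assumption: per-token accuracy improves smoothly (logistically) with
# log model scale, and the task answer is K tokens long.

K = 10  # assumed answer length in tokens

def per_token_accuracy(scale: float) -> float:
    """Smoothly improving per-token accuracy as a function of model scale."""
    return 1.0 / (1.0 + math.exp(-(math.log10(scale) - 9.0)))

for scale in (1e7, 1e8, 1e9, 1e10, 1e11, 1e12):
    p = per_token_accuracy(scale)
    # Linear metric: credit for each correct token -> changes gradually.
    # Exact-match metric: all K tokens must be right -> p ** K.
    print(f"scale={scale:.0e}  per-token={p:.3f}  exact-match={p ** K:.4f}")
```

Scored per token, performance climbs gradually across the whole range; scored as all-or-nothing exact match, it sits near zero and then shoots upward at large scales. That jump is exactly the “sharpness” the paper attributes to the nonlinear metric rather than to any change in the model.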

#singularity

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.  

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in. Read More

#chatbots, #singularity

‘Godfather of A.I.’ leaves Google after a decade to warn society of technology he’s touted

Geoffrey Hinton, known as “The Godfather of AI,” received his Ph.D. in artificial intelligence 45 years ago and has remained one of the most respected voices in the field.

For the past decade, Hinton worked part-time at Google, splitting his time between the company’s Silicon Valley headquarters and Toronto. But he has quit the internet giant, and he told The New York Times that he’ll be warning the world about the potential threat of AI, which he said is coming sooner than he previously thought. Read More

#singularity

We Aren’t Close To Creating A Rapidly Self-Improving AI

When discussing artificial intelligence, a popular topic is recursive self-improvement. The idea in a nutshell: once an AI figures out how to improve its own intelligence, it might be able to bootstrap itself to a god-like intellect and become so powerful that it could wipe out humanity. This is sometimes called the AI singularity or an intelligence explosion. Some even speculate that once an AI is sufficiently advanced to begin the bootstrapping process, it will improve far too quickly for us to react, becoming unstoppably intelligent in a very short time (usually described as under a year). This is what people refer to as the fast takeoff scenario.

Recent progress in the field has led some people to fear that a fast takeoff might be around the corner. These fears have led to strong reactions; for example, a call for a moratorium on training models larger than GPT-4, in part due to fears that a larger model could spontaneously manifest self-improvement.

However, at the moment, these fears are unfounded. I argue that an AI with the ability to rapidly self-improve (i.e. one that could suddenly develop god-like abilities and threaten humanity) still requires at least one paradigm-changing breakthrough. My argument leverages an inside-view perspective on the specific ways in which progress in AI has manifested over the past decade. Read More
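The fast-versus-slow takeoff framing can be made concrete with a toy growth model (purely illustrative, not from the article): suppose capability grows at a rate proportional to capability raised to an exponent a, where a stands in for the returns on self-improvement. Whether takeoff is explosive, gradual, or absent hinges entirely on that exponent, which is one way to restate the article’s point that a paradigm-changing breakthrough would be required before runaway self-improvement is on the table.

```python
# Toy takeoff model (illustrative only): dI/dt = I**a, integrated with
# Euler steps from I = 1. The exponent a is a stand-in for the returns
# on self-improvement:
#   a < 1 -> diminishing returns, growth stays gradual
#   a = 1 -> constant returns, ordinary exponential growth
#   a > 1 -> compounding returns, runaway "fast takeoff"

def steps_to_threshold(a: float, threshold: float = 1e6,
                       dt: float = 0.1, max_steps: int = 500):
    """Report the step at which capability crosses the threshold, if ever."""
    capability = 1.0
    for step in range(1, max_steps + 1):
        capability += dt * capability ** a
        if capability >= threshold:
            return step, capability
    return None, capability

for a in (0.5, 1.0, 1.5):
    step, final = steps_to_threshold(a)
    if step is None:
        print(f"a={a}: no takeoff in 500 steps (capability ~ {final:,.0f})")
    else:
        print(f"a={a}: capability crossed 1e6 at step {step}")
```

With diminishing returns the threshold is never reached; with constant returns it is reached eventually; with compounding returns it is crossed almost immediately after a long quiet period. The model is far too crude to predict anything, but it shows why so much of the debate reduces to an argument about what the returns on self-improvement actually are.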

#singularity