Original Father of AI on Dangers! (Prof. Jürgen Schmidhuber)

Read More
#singularity, #videos

Is Artificial Intelligence Our “Oppenheimer Moment”? Mo Gawdat’s Warning To The World

Read More

#singularity, #videos

OpenAI is forming a new team to bring ‘superintelligent’ AI under control

OpenAI is forming a new team led by Ilya Sutskever, its chief scientist and one of the company’s co-founders, to develop ways to steer and control “superintelligent” AI systems.

In a blog post published today, Sutskever and Jan Leike, a lead on the alignment team at OpenAI, predict that AI with intelligence exceeding that of humans could arrive within the decade. This AI — assuming it does, indeed, arrive eventually — won’t necessarily be benevolent, necessitating research into ways to control and restrict it, Sutskever and Leike say. — Read More

#singularity

How existential risk became the biggest meme in AI — “Ghost stories are contagious.”

Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

… Concerns about runaway, self-improving machines have been around since Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical date at which artificial intelligence outstrips human intelligence and machines take over. 

But at the heart of such concerns is the question of control: How do humans stay on top if (or when) machines get smarter? — Read More

#singularity

Shall we play a game?

GREETINGS PROFESSOR FALKEN

SHALL WE PLAY A GAME?

Maybe, but AI is not an arms race (Read More). We need to ask ourselves, Is Avoiding Extinction from AI Really an Urgent Priority? The history of technology suggests that the greatest risks come not from the tech, but from the people who control it (Read More). Somehow, with AI, we leapt immediately to DEFCON 1 (Read More).

So, existential threat or marketing hype?

HOW ABOUT A NICE GAME OF CHESS?

#singularity

350+ industry leaders sign Statement on AI Risk

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. — View the List of Signatories

#singularity

AI and the future of humanity 

Read More

#singularity, #videos

“Godfather of AI” Geoffrey Hinton Warns of the “Existential Threat” of AI | Amanpour and Company

Read More

#singularity, #videos

Are Emergent Abilities of Large Language Models a Mirage?

Recent work claims that large language models display emergent abilities: abilities not present in smaller-scale models that are present in larger-scale models. What makes emergent abilities intriguing is twofold: their sharpness, transitioning seemingly instantaneously from not present to present, and their unpredictability, appearing at seemingly unforeseeable model scales. Here, we present an alternative explanation for emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, one can choose a metric which leads to the inference of an emergent ability or another metric which does not. Thus, our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale. We present our explanation in a simple mathematical model, then test it in three complementary ways: we (1) make, test, and confirm three predictions on the effect of metric choice using the InstructGPT/GPT-3 family on tasks with claimed emergent abilities; (2) make, test, and confirm two predictions about metric choices in a meta-analysis of emergent abilities on BIG-Bench; and (3) show how similar metric decisions suggest apparent emergent abilities on vision tasks in diverse deep network architectures (convolutional, autoencoder, transformer). In all three analyses, we find strong supporting evidence that emergent abilities may not be a fundamental property of scaling AI models. Read More
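The paper’s core argument can be illustrated with a toy calculation (all numbers below are hypothetical, not from the paper): suppose a model family’s per-token accuracy improves smoothly with scale, but the task is scored by exact match on a multi-token answer. The continuous metric shows gradual progress, while the exact-match metric appears to jump from near-zero to high performance, looking “emergent” even though the underlying behavior changed smoothly.

```python
# Hypothetical illustration of metric choice creating apparent emergence.
# Assumption: per-token accuracy rises smoothly with scale, and an
# exact-match score requires all tokens of the answer to be correct
# (modeled here as independent, so exact match = p ** answer_length).

scales = [1e8, 1e9, 1e10, 1e11, 1e12]           # hypothetical parameter counts
per_token_acc = [0.50, 0.65, 0.80, 0.90, 0.97]  # smooth improvement (assumed)

ANSWER_LEN = 10  # number of tokens that must all be correct

for n, p in zip(scales, per_token_acc):
    exact_match = p ** ANSWER_LEN
    print(f"{n:.0e} params: per-token acc {p:.2f} | exact match {exact_match:.3f}")
```

Under these assumed numbers, per-token accuracy climbs steadily (0.50 → 0.97), while exact match sits near zero for the first three scales and then shoots up at the largest two, which is exactly the sharp, seemingly unpredictable transition the paper attributes to the metric rather than to the model.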

#singularity

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

“I have suddenly switched my views on whether these things are going to be more intelligent than us.”

I met Geoffrey Hinton at his house on a pretty street in north London just four days before the bombshell announcement that he is quitting Google. Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he is stepping down to focus on new concerns he now has about AI.  

Stunned by the capabilities of new large language models like GPT-4, Hinton wants to raise public awareness of the serious risks that he now believes may accompany the technology he ushered in. Read More

#chatbots, #singularity