‘Godfather of A.I.’ leaves Google after a decade to warn society of technology he’s touted

Geoffrey Hinton, known as “The Godfather of AI,” received his Ph.D. in artificial intelligence 45 years ago and has remained one of the most respected voices in the field.

For the past decade, Hinton worked part-time at Google, splitting his time between the company’s Silicon Valley headquarters and Toronto. But he has quit the internet giant, and he told The New York Times that he’ll be warning the world about the potential threat of AI, which he said is coming sooner than he previously thought. Read More

#singularity

We Aren’t Close To Creating A Rapidly Self-Improving AI

When discussing artificial intelligence, a popular topic is recursive self-improvement. The idea in a nutshell: once an AI figures out how to improve its own intelligence, it might be able to bootstrap itself to a god-like intellect, and become so powerful that it could wipe out humanity. This is sometimes called the AI singularity or a superintelligence explosion. Some even speculate that once an AI is sufficiently advanced to begin the bootstrapping process, it will improve far too quickly for us to react, and become unstoppably intelligent in a very short time (usually described as under a year). This is what people refer to as the fast takeoff scenario.

Recent progress in the field has led some people to fear that a fast takeoff might be around the corner. These fears have led to strong reactions; for example, a call for a moratorium on training models larger than GPT-4, in part due to fears that a larger model could spontaneously manifest self-improvement.

However, at the moment, these fears are unfounded. I argue that building an AI capable of rapid self-improvement (i.e. one that could suddenly develop god-like abilities and threaten humanity) still requires at least one paradigm-changing breakthrough. My argument leverages an inside-view perspective on the specific ways in which progress in AI has manifested over the past decade. Read More

#singularity

ChaosGPT: Empowering GPT with Internet and Memory to Destroy Humanity

Read More
#singularity, #videos

Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI

Read More
#singularity, #videos

In San Francisco, some people wonder when A.I. will kill us all

  • Beneath all the recent hype about AI, industry participants are engaged in furious debates about how to prepare for an AI so powerful it could take control of itself.
  • This idea of artificial general intelligence, or AGI, isn’t just dorm-room talk: Big name technologists like Sam Altman and Marc Andreessen talk about it, using “in” terms like “misalignment” and “the paperclip maximization problem.”
  • In a San Francisco pop-up museum devoted to the topic called the Misalignment Museum, a sign reads, “Sorry for killing most of humanity.”
Read More

#singularity

Humanity May Reach Singularity Within Just 7 Years, Trend Shows

By one major metric, artificial general intelligence is much closer than you think.

  • By one unique metric, we could approach technological singularity by the end of this decade, if not sooner.
  • A translation company developed a metric, Time to Edit (TTE), to calculate the time it takes for professional human editors to fix AI-generated translations compared to human ones. This may help quantify the speed toward singularity.
  • An AI that can translate speech as well as a human could change society.
Read More
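The article doesn’t give a formula for Time to Edit, but the idea behind a TTE-style metric can be sketched as a simple ratio of average editing times. Everything below — the numbers and variable names — is hypothetical, purely to illustrate how such a comparison might be computed:

```python
# Hypothetical sketch of a Time-to-Edit (TTE) style comparison: average
# time editors spend fixing machine translations vs. human translations.
# A ratio approaching 1.0 would mean machine output needs no more
# editing than human output. All numbers are made up for illustration.

machine_edit_times = [12.0, 9.5, 11.0, 10.5]   # seconds per segment (hypothetical)
human_edit_times = [8.0, 7.5, 9.0, 8.5]        # seconds per segment (hypothetical)

def mean(xs):
    return sum(xs) / len(xs)

tte_ratio = mean(machine_edit_times) / mean(human_edit_times)
print(round(tte_ratio, 2))  # > 1.0: machine output still needs more editing
```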

#singularity

The danger of advanced artificial intelligence controlling its own feedback

How would an artificial intelligence (AI) decide what to do? One common approach in AI research is called “reinforcement learning”.

Reinforcement learning gives the software a “reward” defined in some way, and lets the software figure out how to maximise the reward. This approach has produced some excellent results, such as building software agents that defeat humans at games like chess and Go, or creating new designs for nuclear fusion reactors.
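The reward-maximisation loop described above can be sketched with a minimal example. This is a toy tabular Q-learning agent on a hypothetical 5-state chain — not any system from the paper — showing only the core pattern: act, observe a reward, and update estimates toward whatever maximises it:

```python
import random

# Toy Q-learning sketch: a hypothetical 5-state chain where moving
# right eventually reaches a rewarding terminal state. This illustrates
# the "maximise a reward signal" loop, nothing more.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward

random.seed(0)
for _ in range(500):  # episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly pick the action with the highest estimate
        a = random.choice((-1, +1)) if random.random() < EPSILON \
            else max((-1, +1), key=lambda a: q[(s, a)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, -1)], q[(nxt, +1)])
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# After training, the greedy policy moves right (+1) from every state.
policy = {s: max((-1, +1), key=lambda a: q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The same act-observe-update loop, scaled up with deep networks and richer environments, is what produced the game-playing and reactor-design results mentioned above.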

However, we might want to hold off on making reinforcement learning agents too flexible and effective.

As we argue in a new paper in AI Magazine, deploying a sufficiently advanced reinforcement learning agent would likely be incompatible with the continued survival of humanity. Read More

#reinforcement-learning, #singularity

DeepMind Says It Had Nothing to Do With Research Paper Saying AI Could End Humanity

After a researcher with a position at DeepMind—the machine intelligence firm owned by Google parent Alphabet—co-authored a paper claiming that AI could feasibly wipe out humanity one day, DeepMind is distancing itself from the work. 

The paper was published recently in the peer-reviewed AI Magazine, and was co-authored by researchers at Oxford University and by Marcus Hutter, an AI researcher who works at DeepMind. The first line of Hutter’s website states the following: “I am Senior Researcher at Google DeepMind in London, and Honorary Professor in the Research School of Computer Science (RSCS) at the Australian National University (ANU) in Canberra.” The paper, which currently lists his affiliation as DeepMind and ANU, runs through some thought experiments about humanity’s future with a superintelligent AI that operates using schemes similar to today’s machine learning programs, such as reward-seeking. It concluded that this scenario could erupt into a zero-sum game between humans and AI that would be “fatal” if humanity loses out. Read More

Read the Paper

#singularity

The hype around DeepMind’s new AI model misses what’s actually cool about it

Earlier this month, DeepMind presented a new “generalist” AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do 604 different tasks. 

But while Gato is undeniably fascinating, in the week since its release some researchers have gotten a bit carried away.

One of DeepMind’s top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn’t contain his excitement. “The game is over!” he tweeted, suggesting that there is now a clear path from Gato to artificial general intelligence, or AGI, a vague concept of human- or superhuman-level AI. …Unsurprisingly, de Freitas’s announcement triggered breathless press coverage that DeepMind is “on the verge” of human-level artificial intelligence. This is not the first time hype has outstripped reality.

…That’s a shame, because Gato is an interesting step. Some models have started to mix different skills, …DeepMind’s AlphaZero learned to play Go, chess, and shogi, …but here’s the crucial difference: AlphaZero could only learn one task at a time. After learning to play Go, it had to forget everything before learning to play chess, and so on. It could not learn to play both games at once. This is what Gato does: it learns multiple different tasks at the same time, which means it can switch between them without having to forget one skill before learning another. It’s a small advance but a significant one. Read More
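The training-regime difference the passage describes — one task at a time versus all tasks interleaved in a single stream — can be sketched schematically. The tasks, example data, and function names below are hypothetical placeholders; only the sampling pattern is the point, and none of this reflects Gato’s actual architecture:

```python
import random

# Toy sketch of sequential vs. interleaved multi-task data scheduling.
# Task names and examples are hypothetical; only the pattern matters.
tasks = {
    "atari": ["episode_a1", "episode_a2"],
    "captioning": ["pair_c1", "pair_c2"],
    "robotics": ["trajectory_r1", "trajectory_r2"],
}

def sequential_schedule(tasks):
    # One task fully, then the next: by the time "robotics" starts,
    # "atari" is never revisited -- the regime the passage attributes
    # to AlphaZero.
    return [ex for name in tasks for ex in tasks[name]]

def interleaved_schedule(tasks, steps, seed=0):
    # All tasks mixed into one training stream, the regime the passage
    # attributes to Gato: every task keeps appearing throughout training.
    rng = random.Random(seed)
    names = list(tasks)
    return [rng.choice(tasks[rng.choice(names)]) for _ in range(steps)]

print(sequential_schedule(tasks)[:2])  # the first examples are all "atari"
print(interleaved_schedule(tasks, 6))  # tasks mixed together
```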

#singularity

A Generalist Agent

Inspired by progress in large-scale language modelling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato. Read More

Paper

#human, #singularity