Tag Archives: Singularity
Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI
In San Francisco, some people wonder when A.I. will kill us all
- Underlying all the recent hype about AI are furious debates among industry participants about how to prepare for an AI so powerful it can take control of itself.
- This idea of artificial general intelligence, or AGI, isn’t just dorm-room talk: Big name technologists like Sam Altman and Marc Andreessen talk about it, using “in” terms like “misalignment” and “the paperclip maximization problem.”
- In a San Francisco pop-up museum devoted to the topic called the Misalignment Museum, a sign reads, “Sorry for killing most of humanity.”
#singularity
Humanity May Reach Singularity Within Just 7 Years, Trend Shows
By one major metric, artificial general intelligence is much closer than you think.
- By one unique metric, we could approach technological singularity by the end of this decade, if not sooner.
- A translation company developed a metric, Time to Edit (TTE), that measures how long professional human editors take to fix AI-generated translations compared with fixing human-generated ones. This may help quantify the speed of progress toward the singularity (a rough sketch of the idea appears after this list).
- An AI that can translate speech as well as a human could change society.
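As a rough illustration of how such a metric could be turned into a timeline, here is a minimal sketch in the spirit of Time to Edit. It is not Translated's actual methodology, and every figure and function name in it is invented for illustration: it just compares the average edit time for machine output against human output and projects when a naive linear trend in that ratio would reach parity.

```python
# Hypothetical Time-to-Edit (TTE) style calculation: how long editors spend fixing
# machine translations relative to human ones, plus a naive linear projection of
# when that ratio would reach parity (1.0). Figures are illustrative, not real data.

def tte_ratio(machine_edit_seconds, human_edit_seconds):
    """Average edit time per segment for machine output relative to human output."""
    avg = lambda xs: sum(xs) / len(xs)
    return avg(machine_edit_seconds) / avg(human_edit_seconds)

def projected_parity_year(yearly_ratios):
    """Fit a simple least-squares line to (year, ratio) pairs and solve for ratio == 1.0."""
    n = len(yearly_ratios)
    x_mean = sum(year for year, _ in yearly_ratios) / n
    y_mean = sum(ratio for _, ratio in yearly_ratios) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in yearly_ratios) / \
            sum((x - x_mean) ** 2 for x, _ in yearly_ratios)
    if slope >= 0:
        return None  # the ratio is not falling, so no parity year can be projected
    return x_mean + (1.0 - y_mean) / slope

# Invented example history: machine output takes 2-4x longer to edit, improving each year.
history = [(2017, 3.5), (2019, 3.0), (2021, 2.6), (2022, 2.4)]
print(round(projected_parity_year(history), 1))  # ~2028 with these made-up numbers
```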
#singularity
The danger of advanced artificial intelligence controlling its own feedback
How would an artificial intelligence (AI) decide what to do? One common approach in AI research is called “reinforcement learning”.
Reinforcement learning gives the software a “reward” defined in some way, and lets the software figure out how to maximise the reward. This approach has produced some excellent results, such as building software agents that defeat humans at games like chess and Go, or creating new designs for nuclear fusion reactors.
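To make that reward loop concrete, here is a minimal tabular Q-learning sketch. It is a generic textbook example rather than anything from the paper, and the toy "move right to collect the reward" environment is invented for illustration; the point is only that the agent is told nothing but a reward and gradually settles on whatever behaviour maximises it.

```python
import random

# Minimal tabular Q-learning on a toy 5-state chain: the agent only receives a scalar
# reward (1.0 at the rightmost state) and learns whichever behaviour maximises it.

N_STATES, ACTIONS = 5, [0, 1]          # action 0 = move left, action 1 = move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

def step(state, action):
    """Toy environment dynamics: reward only for reaching the rightmost state."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

for _ in range(2000):                   # episodes
    s = 0
    for _ in range(20):                 # steps per episode
        if random.random() < epsilon:
            a = random.choice(ACTIONS)  # occasionally explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Nudge the estimate of "reward that follows this action" toward what was observed.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, act)] for act in ACTIONS) - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda act: Q[(0, act)]))  # prints 1: the learned policy moves right
```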
However, we might want to hold off on making reinforcement learning agents too flexible and effective.
As we argue in a new paper in AI Magazine, deploying a sufficiently advanced reinforcement learning agent would likely be incompatible with the continued survival of humanity. Read More
DeepMind Says It Had Nothing to Do With Research Paper Saying AI Could End Humanity
After a researcher with a position at DeepMind—the machine intelligence firm owned by Google parent Alphabet—co-authored a paper claiming that AI could feasibly wipe out humanity one day, DeepMind is distancing itself from the work.
The paper was published recently in the peer-reviewed AI Magazine and was co-authored by researchers at Oxford University and by Marcus Hutter, an AI researcher who works at DeepMind. The first line of Hutter’s website states: “I am Senior Researcher at Google DeepMind in London, and Honorary Professor in the Research School of Computer Science (RSCS) at the Australian National University (ANU) in Canberra.”
The paper, which currently lists his affiliation as DeepMind and ANU, runs through some thought experiments about humanity’s future with a superintelligent AI that operates using schemes similar to today’s machine learning programs, such as reward-seeking. It concluded that this scenario could erupt into a zero-sum game between humans and AI that would be “fatal” if humanity loses out. Read More
Read the Paper
The hype around DeepMind’s new AI model misses what’s actually cool about it
Earlier this month, DeepMind presented a new “generalist” AI model called Gato. The model can play Atari video games, caption images, chat, and stack blocks with a real robot arm, the Alphabet-owned AI lab announced. All in all, Gato can do 604 different tasks.
But while Gato is undeniably fascinating, in the week since its release some researchers have gotten a bit carried away.
One of DeepMind’s top researchers and a coauthor of the Gato paper, Nando de Freitas, couldn’t contain his excitement. “The game is over!” he tweeted, suggesting that there is now a clear path from Gato to artificial general intelligence, or AGI, a vague concept of human- or superhuman-level AI. …Unsurprisingly, de Freitas’s announcement triggered breathless press coverage that DeepMind is “on the verge” of human-level artificial intelligence. This is not the first time hype has outstripped reality.
…That’s a shame, because Gato is an interesting step. Some models have started to mix different skills, …DeepMind’s AlphaZero learned to play Go, chess, and shogi, …but here’s the crucial difference: AlphaZero could only learn one task at a time. After learning to play Go, it had to forget everything before learning to play chess, and so on. It could not learn to play both games at once. This is what Gato does: it learns multiple different tasks at the same time, which means it can switch between them without having to forget one skill before learning another. It’s a small advance but a significant one. Read More
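As a rough sketch of that difference, and emphatically not DeepMind's training code, the contrast is largely one of training schedule: sequential training replaces one specialism with the next, while interleaving every task in a single run means one set of weights has to cover all of them. The Task and Model classes below are placeholders invented for the example.

```python
import random

# Illustrative contrast between the two training schedules, with stand-in classes.

class Task:
    def __init__(self, name):
        self.name = name
    def sample_batch(self):
        return (self.name, [random.random() for _ in range(4)])  # placeholder batch

class Model:
    def __init__(self):
        self.updates = []               # record which task each update came from
    def train_step(self, batch):
        self.updates.append(batch[0])   # a real model would adjust shared weights here

def train_sequentially(model, tasks, steps_per_task):
    """One task at a time: by the end, all recent updates come from the last task."""
    for task in tasks:
        for _ in range(steps_per_task):
            model.train_step(task.sample_batch())

def train_interleaved(model, tasks, total_steps):
    """Every update can come from any task, so one set of weights must serve them all."""
    for _ in range(total_steps):
        model.train_step(random.choice(tasks).sample_batch())

tasks = [Task("atari"), Task("captioning"), Task("block-stacking")]
m = Model()
train_interleaved(m, tasks, 12)
print(m.updates)  # updates from all three tasks mixed together in a single run
```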
A Generalist Agent
Inspired by progress in large-scale language modelling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato. Read More
Paper
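A rough sketch of the "everything becomes tokens" idea follows. The helper names, vocabulary sizes, and binning scheme are assumptions made up for illustration, not Gato's actual tokeniser (which the paper describes); the point is only that text and continuous controls can share one integer stream, so a single sequence model can consume and emit all of them.

```python
# Hypothetical serialisation of mixed modalities into one token stream. The vocabulary
# sizes and binning below are illustrative assumptions, not the paper's actual scheme.

TEXT_VOCAB = 32_000        # assumed text vocabulary size
CONTINUOUS_BINS = 1_024    # assumed number of bins for continuous values

def tokenize_text(text):
    """Stand-in for a real subword tokenizer: map characters into the text token range."""
    return [ord(c) % TEXT_VOCAB for c in text]

def tokenize_continuous(values, low=-1.0, high=1.0):
    """Discretise continuous values (e.g. joint torques) into bins past the text range."""
    def bin_of(v):
        v = min(max(v, low), high)                       # clamp to the expected range
        return int((v - low) / (high - low) * (CONTINUOUS_BINS - 1))
    return [TEXT_VOCAB + bin_of(v) for v in values]      # offset so token ranges don't collide

def build_sequence(instruction, joint_torques):
    """One interleaved sequence; the model simply predicts the next token, whatever it encodes."""
    return tokenize_text(instruction) + tokenize_continuous(joint_torques)

print(build_sequence("stack the red block", [0.12, -0.4, 0.9]))
```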
The singularity is very close
Within one century, biological intelligence will be a tiny minority of all sentient life. It will be very rare to be human. It will be very rare to have cells and blood and a heart. Human beings will be outnumbered a thousand to one by conscious machine intelligences.
Artificial General Intelligence (AGI) is about to go from being science fiction to being part of everybody’s day-to-day life. It’s also going to happen in the blink of an eye — because once it gets loose, there is no stopping it from scaling itself incredibly rapidly. Whether we want it to or not, it will impact every human being’s life.
Some people believe the singularity won’t happen for a very long time, or at all. I’d like to discuss why I am nearly certain it will happen in the next 20 years. My overall prediction is based on 3 hypotheses:
- Scale is not the solution.
- AI will design AGI.
- The ball is already rolling.
#singularity
Discontinuities And General Artificial Intelligence
…Today I want to talk about predictions of when we reach a more general version of artificial intelligence, similar to a human brain, and ask what we’ve learned. There have been a few approaches to this over the years. One that I was a big fan of was this 2015 WaitButWhy piece on the AI revolution. The argument in that piece is that AI progress is doubling while we expect a linear trend, so the doubling will explode the AI capabilities of machines sooner than we anticipate. I admit that I was a big fan of this argument, but it increasingly looks incorrect. While it is possible that this is true, and that we are still just in the early stages of the trend, it increasingly looks like the marginal gains from existing approaches to AI are declining and won’t get us to general AI.
The other big prediction about when we get there is Ray Kurzweil’s extrapolation of computing power, noting that next year, in 2023, the amount of compute you can buy for $1000 will surpass the compute available in the human brain, bringing us close to general AI. Of course, that only works if the key to AI is raw compute power. It increasingly looks like that may be wrong. Read More
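For a sense of the arithmetic behind that kind of extrapolation, here is a back-of-the-envelope sketch. The brain estimate and price-performance figures below are illustrative assumptions in the spirit of the argument, not numbers from the article, and the projected year swings by a decade or more depending on which constants you pick, which is part of why such forecasts are contested.

```python
import math

# Back-of-the-envelope version of the "when does $1000 buy a brain's worth of compute?"
# extrapolation. Every constant below is an illustrative assumption, not data from the article.

BRAIN_OPS_PER_SECOND = 1e16     # one commonly used rough estimate of brain compute
OPS_PER_DOLLAR_START = 1e10     # assumed price-performance in the start year (ops/sec per $)
DOUBLING_TIME_YEARS = 1.5       # assumed doubling time for price-performance
BUDGET_DOLLARS = 1_000

def year_budget_buys_a_brain(start_year=2020):
    """Project the year when BUDGET_DOLLARS of hardware matches the assumed brain compute."""
    needed_ops_per_dollar = BRAIN_OPS_PER_SECOND / BUDGET_DOLLARS
    doublings_needed = math.log2(needed_ops_per_dollar / OPS_PER_DOLLAR_START)
    return start_year + doublings_needed * DOUBLING_TIME_YEARS

print(round(year_budget_buys_a_brain(), 1))  # ~2035 here; other assumed constants give 2023
```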