Is the text you’re reading right now written by an AI? The Turing Test amounts to little more than answering that simple question. Read More
Tag Archives: Human
Are we close to achieving Artificial General Intelligence?
In the summer of 1956, AI pioneers John McCarthy, Marvin Minsky, Nat Rochester, and Claude Shannon wrote: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” They figured this would take 10 people two months.
Fast-forward to 1970, and they tried again: “In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that, its powers will be incalculable.” Read More
Google suspends engineer who claims its AI is sentient
Google has placed one of its engineers on paid administrative leave for allegedly breaking its confidentiality policies after he grew concerned that an AI chatbot system had achieved sentience, the Washington Post reports. The engineer, Blake Lemoine, works for Google’s Responsible AI organization, and was testing whether its LaMDA model generates discriminatory language or hate speech.
The engineer’s concerns reportedly grew out of convincing responses he saw the AI system generating about its rights and the ethics of robotics. In April he shared a document with executives titled “Is LaMDA Sentient?” containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows it arguing “that it is sentient because it has feelings, emotions and subjective experience.” Read More
New DNA Repair Mechanism Holds Promise for Precision Cancer Therapies
Scientists at the University of Birmingham and the Francis Crick Institute have discovered a new way in which cancer cells repair double-stranded breaks in DNA.
The findings were published in a paper titled “H3K4 methylation by SETD1A/BOD1L facilitates RIF1-dependent NHEJ” in the journal Molecular Cell on May 19, 2022. The work sheds light on how cancer cells respond to chemotherapy and radiotherapy, including how cancer cells may develop resistance to treatment. These insights could help develop precision medicine approaches for cancer patients. Read More
AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find
Algorithms could increasingly influence human culture, even though we don’t have a good understanding of how they interact with us or each other.
A new study shows that humans can learn new things from artificial intelligence systems and pass them on to other humans, in ways that could potentially influence wider human culture.
The study, published on Monday by a group of researchers at the Center for Humans and Machines at the Max Planck Institute for Human Development, suggests that while humans can learn from algorithms how to better solve certain problems, human biases prevented performance improvements from lasting as long as expected. Humans tended to prefer solutions from other humans over those proposed by algorithms, because they were more intuitive or less costly upfront, even if they paid off more later. Read More
DeepMind researcher claims new ‘Gato’ AI could lead to AGI, says ‘the game is over!’
According to Dr. Nando de Freitas, a lead researcher at Google’s DeepMind, humanity is apparently on the verge of solving artificial general intelligence (AGI) within our lifetimes.
In response to an opinion piece penned by yours truly, the scientist posted a thread on Twitter that began with what’s perhaps the boldest statement we’ve seen from anyone at DeepMind concerning its current progress toward AGI:
My opinion: It’s all about scale now! The Game is Over! Read More
A Generalist Agent
Inspired by progress in large-scale language modelling, we apply a similar approach towards building a single generalist agent beyond the realm of text outputs. The agent, which we refer to as Gato, works as a multi-modal, multi-task, multi-embodiment generalist policy. The same network with the same weights can play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens. In this report we describe the model and the data, and document the current capabilities of Gato. Read More
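The key idea sketched above is that one network can handle many modalities because everything is serialized into a single token sequence: text becomes subword tokens, while continuous observations and actions (such as joint torques) are discretized into their own token ranges. The following is a minimal illustrative sketch of that serialization idea, not the paper’s actual tokenizer; the function names, vocabulary size, and bin counts are all assumptions made up for the example:

```python
def tokenize_text(s, vocab_size=32000):
    # Stand-in for a real subword tokenizer: maps each word to an
    # integer token ID in the text range [0, vocab_size).
    return [hash(w) % vocab_size for w in s.split()]

def tokenize_continuous(values, bins=1024, offset=32000):
    # Discretize continuous values (e.g. joint torques) into `bins`
    # uniform buckets over [-1, 1], offset past the text vocabulary
    # so the two modalities occupy disjoint token ranges.
    out = []
    for v in values:
        v = max(-1.0, min(1.0, v))              # clip to [-1, 1]
        b = int((v + 1.0) / 2.0 * (bins - 1))   # bucket index in [0, bins)
        out.append(offset + b)
    return out

# One interleaved "episode": a text instruction followed by three
# continuous proprioception readings, flattened into one sequence
# that a single sequence model could consume.
sequence = (
    tokenize_text("stack the red block")
    + tokenize_continuous([0.12, -0.57, 0.98])
)
print(sequence)
```

The design choice worth noting is the disjoint token ranges: because text tokens and discretized continuous tokens never overlap, the model can infer from the tokens themselves (and their context) which modality it is reading or emitting.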
Paper
Artificial intelligence beats eight world champions at bridge
Victory marks milestone for AI as bridge requires more human skills than other strategy games
An artificial intelligence has beaten eight world champions at bridge, a game in which human supremacy has resisted the march of the machines until now.
The victory represents a new milestone for AI because in bridge players work with incomplete information and must react to the behaviour of several other players – a scenario far closer to human decision-making. Read More
The Puzzling Reason AI May Never Compete With Human Consciousness
Two immersive thought experiments lead us right into a flurry of questions surrounding the human mind. You decide where you stand.
Constructing humanlike artificial intelligence often starts with deconstructing humans. Take fingerprints: when holding soapy dishes, we intuitively adjust our grip based on our fingerprint structure. It just doesn’t cross our minds, because we chalk it up to reflex – and for the longest time, so did scientists. No one had any equations to unravel how this works because, well, it didn’t matter much. But the rise of robotics has complicated things.
For a robot to do this, we have to figure out precisely what’s going on, and even turn that knowledge into writable code. Now decoding fingerprint grip matters, and researchers are finally trying to find a new law of physics to explain it. Read More
OpenAI’s Chief Scientist Claimed AI May Be Conscious — and Kicked Off a Furious Debate
A month ago, Ilya Sutskever tweeted that large neural networks may be “slightly conscious.” He’s a co-founder and Chief Scientist of OpenAI, and also co-authored the landmark paper that sparked the deep learning revolution. With such credentials, he certainly knew his bold claim – accompanied by neither evidence nor an explanation – would attract the attention of the AI community, cognitive scientists, and philosophy lovers alike. In a matter of days, the tweet got more than 400 responses and twice that number of retweets.
People in AI’s vanguard circles like to ponder the future of AI: When will we achieve artificial general intelligence (AGI)? What are the capabilities and limitations of large transformer-based systems like GPT-3 and superhuman reinforcement learning models like AlphaZero? When – if ever – will AI develop consciousness? Read More