Sight is the sense on which humans chiefly rely to navigate the world, but sound may be just as important — it’s been shown that people can learn to follow subtle cues in the volume, direction, and speed of audio signals. Inspired by this, scientists at the University of Eastern Finland recently proposed, in a preprint paper (“Do Autonomous Agents Benefit from Hearing?”), an AI system that complements visual data with sound. Preliminary results, they say, indicate that the approach improves agents’ ability to complete goals in a 3D maze.
“Learning using only visual information may not always be easy for the learning agent,” wrote the coauthors. “For example, it is difficult for the agent to reach the target using only visual information in scenarios where there are many rooms and there is no direct line of sight between the agent and the target … Thus, the use of audio features could provide valuable information for such problems.” Read More
Daily Archives: May 15, 2019
Do Autonomous Agents Benefit from Hearing?
Mapping states to actions in deep reinforcement learning is mainly based on visual information. The common approach for dealing with visual information is to extract pixels from images and use them as the state representation for the reinforcement learning agent. But any vision-only agent is handicapped by its inability to sense audible cues. Using hearing, animals are able to sense targets that are outside their visual range. In this work, we propose the use of audio as information complementary to vision in the state representation. We assess the impact of this multi-modal setup on reach-the-goal tasks in the ViZDoom environment. Results show that the agent improves its behaviour when visual information is accompanied by audio features. Read More
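The idea of a multi-modal state representation can be sketched in a few lines: encode the visual frame into a feature vector, summarize the audio buffer into a few cues (volume and left-right balance as a rough direction signal), and concatenate the two before handing the state to the agent. This is an illustrative sketch only — the function names, feature choices, and dimensions are assumptions for exposition, not the architecture used in the paper.

```python
import numpy as np

def visual_features(frame: np.ndarray) -> np.ndarray:
    """Grayscale and downsample an RGB frame, then flatten it
    into a feature vector (a crude stand-in for a CNN encoder)."""
    gray = frame.mean(axis=2)       # (H, W)
    small = gray[::4, ::4]          # coarse 4x downsampling
    return small.flatten() / 255.0

def audio_features(stereo: np.ndarray) -> np.ndarray:
    """Summarize a stereo audio buffer as per-channel RMS volume
    plus a left-right balance cue (a rough direction signal)."""
    rms = np.sqrt((stereo ** 2).mean(axis=0))   # (2,) channel volumes
    balance = rms[0] - rms[1]                   # > 0 means louder on the left
    return np.concatenate([rms, [balance]])

def multimodal_state(frame: np.ndarray, stereo: np.ndarray) -> np.ndarray:
    """Concatenate visual and audio features into a single state
    vector that a reinforcement learning agent could consume."""
    return np.concatenate([visual_features(frame), audio_features(stereo)])

# Example: a 64x64 RGB frame and 1024 stereo audio samples
frame = np.random.randint(0, 256, (64, 64, 3)).astype(np.float32)
stereo = np.random.randn(1024, 2).astype(np.float32)
state = multimodal_state(frame, stereo)
print(state.shape)  # 16*16 = 256 visual dims + 3 audio dims -> (259,)
```

A vision-only baseline would simply drop `audio_features` from the concatenation, which is what the paper's ablation compares against.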
Risks From General Artificial Intelligence Without an Intelligence Explosion
Artificial intelligence systems we have today can be referred to as narrow AI – they perform well at specific tasks, like playing chess or Jeopardy, and at some classes of problems like Atari games. Many experts predict that general AI, which would be able to perform most tasks humans can, will be developed later this century, with median estimates around 2050. When people talk about long-term existential risk from the development of general AI, they commonly refer to the intelligence explosion (IE) scenario. AI risk skeptics often argue against AI safety concerns along the lines of “Intelligence explosion sounds like science fiction and seems really unlikely, therefore there’s not much to worry about”. It’s unfortunate when AI safety concerns are rounded down to worries about IE. Unlike I. J. Good, I do not consider this scenario inevitable (though relatively likely), and I would expect general AI to present an existential risk even if I knew for sure that intelligence explosion were impossible. Read More