Monthly Archives: May 2019

Sight is the sense on which humans chiefly rely to navigate the world, but sound may be just as important: people have been shown to learn to follow subtle cues in the volume, direction, and speed of audio signals. Inspired by this, scientists at the University of Eastern Finland recently proposed, in a preprint paper (“Do Autonomous Agents Benefit from Hearing?”), an AI system that complements visual data with sound. Preliminary results, they say, indicate that the approach improves agents’ ability to complete goals in a 3D maze.
“Learning using only visual information may not always be easy for the learning agent,” wrote the coauthors. “For example, it is difficult for the agent to reach the target using only visual information in scenarios where there are many rooms and there is no direct line of sight between the agent and the target … Thus, the use of audio features could provide valuable information for such problems.” Read More
Do Autonomous Agents Benefit from Hearing?
Mapping states to actions in deep reinforcement learning is mainly based on visual information. The common approach to handling visual information is to extract pixels from images and use them as the state representation for the reinforcement learning agent. But any vision-only agent is handicapped by its inability to sense audible cues. Using hearing, animals can sense targets outside their visual range. In this work, we propose the use of audio as information complementary to vision in the state representation. We assess the impact of such a multi-modal setup on reach-the-goal tasks in the ViZDoom environment. Results show that the agent improves its behaviour when visual information is accompanied by audio features. Read More
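The abstract describes the idea at a high level only. A minimal sketch of the core trick, concatenating a visual embedding with an audio feature vector before the policy head, is below; all module names, layer sizes, and the 128-dimensional audio features are illustrative assumptions, not the authors' actual model.

```python
# Minimal sketch (not the authors' code): fusing visual and audio
# features into a single state representation for an RL agent.
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    """Maps (pixels, audio features) to action scores."""
    def __init__(self, n_actions: int):
        super().__init__()
        # Small conv stack over 3x84x84 frames (frame size is an assumption).
        self.vision = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),  # -> 64 * 9 * 9 = 5184 features for 84x84 input
        )
        # MLP over a fixed-length audio feature vector, e.g. per-step
        # spectrogram statistics (128 dims assumed).
        self.audio = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.head = nn.Linear(5184 + 64, n_actions)

    def forward(self, pixels: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # The multi-modal state is simply the concatenation of both embeddings.
        state = torch.cat([self.vision(pixels), self.audio(audio)], dim=-1)
        return self.head(state)
```

Dropping the audio branch recovers a vision-only baseline, which is roughly what the paper's comparison amounts to.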
Risks From General Artificial Intelligence Without an Intelligence Explosion
Artificial intelligence systems we have today can be referred to as narrow AI: they perform well at specific tasks, like playing chess or Jeopardy, and at some classes of problems, like Atari games. Many experts predict that general AI, which would be able to perform most tasks humans can, will be developed later this century, with median estimates around 2050. When people talk about long-term existential risk from the development of general AI, they commonly refer to the intelligence explosion (IE) scenario. AI risk skeptics often argue against AI safety concerns along the lines of “Intelligence explosion sounds like science fiction and seems really unlikely, therefore there’s not much to worry about.” It’s unfortunate when AI safety concerns are rounded down to worries about IE. Unlike I. J. Good, I do not consider this scenario inevitable (though I do consider it relatively likely), and I would expect general AI to present an existential risk even if I knew for sure that an intelligence explosion were impossible. Read More
AI develops human-like number sense – taking us a step closer to building machines with general intelligence
Numbers figure pretty high up on the list of what a computer can do well. While humans often struggle to split a restaurant bill, a modern computer can make millions of calculations in a mere second. Humans, however, have an innate and intuitive number sense that helped us, among other things, to build computers in the first place.
Unlike a computer, a human knows when looking at four cats, four apples and the symbol 4 that they all have one thing in common – the abstract concept of “four” – without even having to count them. This illustrates the difference between the human mind and the machine, and helps explain why we are not even close to developing AIs with the broad intelligence that humans possess. But now a new study, published in Science Advances, reports that an AI has spontaneously developed a human-like number sense. Read More
At Stitch Fix, data scientists and A.I. become personal stylists
With Stitch Fix, users don’t go shopping for their clothes. Professional stylists do the job for them and the personal shopping service ships the new clothes to their door.
The stylists aren’t working on their own, though; they’re using artificial intelligence (A.I.) and a team of about 60 data scientists.
That combo is behind the success at Stitch Fix, a San Francisco-based online subscription and shopping service founded in 2011. Read More
The Netflix Recommender System: Algorithms, Business Value, and Innovation
Storytelling has always been at the core of human nature. Major technological breakthroughs that changed society in fundamental ways have also allowed for richer and more engaging stories to be told. It is not hard to imagine our ancestors gathering around a fire in a cave and enjoying stories that were made richer by supporting cave paintings. Writing, and later the printing press, led to more varied and richer stories that were distributed more widely than ever before. More recently, television led to an explosion in the use and distribution of video for storytelling. Today, all of us are lucky to be witnessing the changes brought about by the Internet. Like previous major technological breakthroughs, the Internet is also having a profound impact on storytelling.
Netflix lies at the intersection of the Internet and storytelling. We are inventing Internet television. Our main product and source of revenue is a subscription service that allows members to stream any video in our collection of movies and TV shows at any time on a wide range of Internet-connected devices. As of this writing, we have more than 65 million members who stream more than 100 million hours of movies and TV shows per day.
The Internet television space is young and competition is rife, so innovation is crucial. A key pillar of our product is the recommender system that helps our members find videos to watch in every session. Our recommender system is not one algorithm, but rather a collection of different algorithms serving different use cases that come together to create the complete Netflix experience. We give an overview of the various algorithms in our recommender system in Section 2, and discuss their business value in Section 3. We describe the process that we use to improve our algorithms in Section 4, review some of our key open problems in Section 5, and present our conclusions in Section 6. Read More (click on the PDF symbol)
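The paper surveys its algorithms at a high level rather than in code. As a concrete illustration of one classic building block that recommender systems of this kind draw on, here is a tiny matrix-factorization recommender; it is an assumption-laden sketch for the reader, not Netflix's implementation.

```python
# Illustrative only: learn user/item embeddings from (user, item, rating)
# triples by SGD, the classic matrix-factorization recipe.
import numpy as np

def factorize(ratings, k=16, lr=0.01, reg=0.1, epochs=20):
    n_users = 1 + max(u for u, _, _ in ratings)
    n_items = 1 + max(i for _, i, _ in ratings)
    rng = np.random.default_rng(0)
    U = rng.normal(scale=0.1, size=(n_users, k))   # user embeddings
    V = rng.normal(scale=0.1, size=(n_items, k))   # item embeddings
    for _ in range(epochs):
        for u, i, r in ratings:
            pu, qi = U[u].copy(), V[i].copy()
            err = r - pu @ qi                      # prediction error
            U[u] += lr * (err * qi - reg * pu)     # gradient steps with
            V[i] += lr * (err * pu - reg * qi)     # L2 regularization
    return U, V  # predicted score for pair (u, i) is U[u] @ V[i]
```

Ranking a user's unseen items by `U[u] @ V.T` then yields a simple personalized row, one of many signals a production system like the one described would combine.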
Artificial Intelligence May Not 'Hallucinate' After All
Thanks to advances in machine learning, computers have gotten really good at identifying what’s in photographs. They started beating humans at the task years ago, and can now even generate fake images that look eerily real. While the technology has come a long way, it’s still not entirely foolproof. In particular, researchers have found that image recognition algorithms remain susceptible to a class of problems called adversarial examples.
Adversarial examples are like optical (or audio) illusions for AI. By altering a handful of pixels, a computer scientist can fool a machine learning classifier into thinking, say, a picture of a rifle is actually one of a helicopter. But to you or me, the image would still look like a gun; it almost seems as if the algorithm is hallucinating. As image recognition technology is used in more places, adversarial examples may present a troubling security risk. Experts have shown they can be used to do things like cause a self-driving car to ignore a stop sign, or make a facial recognition system falsely identify someone. Read More
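One standard recipe for crafting such adversarial examples is the fast gradient sign method (FGSM) of Goodfellow et al.; a minimal sketch against a generic PyTorch classifier follows. The article does not tie itself to any particular attack, so treat this as background rather than the method used in the systems described.

```python
# Minimal FGSM sketch: nudge each pixel a tiny step in the direction
# that increases the classifier's loss, leaving the image visually unchanged.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """`image`: batched tensor with values in [0, 1]; `label`: true class indices."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Per-pixel perturbation is bounded by epsilon, so the change stays
    # imperceptible to a human even when the model's prediction flips.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```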
Adversarial Examples Are Not Bugs, They Are Features
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data. Read More
The Race for Artificial Intelligence: China vs. America
Let’s be clear: artificial intelligence, and in particular its latest development, deep learning, which mimics the way the human mind works, first emerged in America. This gave the U.S. a huge head start over the rest of the world, including China, and put it firmly in the lead of the race for AI.
What Americans didn’t develop at home, they bought from Europe. In this respect, two British firms stand out with groundbreaking contributions to AI development: ARM and DeepMind.
While all eyes are trained on the AI race between China and America, is there a role left for Europe?
From the start of the digital revolution, and in spite of America’s lead, Europe has always had a fundamental role in digital research, a role often overlooked and even downplayed by the media mesmerized by Silicon Valley fireworks.
But now the fireworks are dying down and getting messy, while China is on the rise. Read More
