With Stitch Fix, users don’t go shopping for their clothes. Professional stylists do the job for them and the personal shopping service ships the new clothes to their door.
The stylists aren’t working on their own, though; they’re supported by artificial intelligence (A.I.) and a team of about 60 data scientists.
That combo is behind the success at Stitch Fix, a San Francisco-based online subscription and shopping service founded in 2011. Read More
Daily Archives: May 9, 2019
The Netflix Recommender System: Algorithms, Business Value, and Innovation
Storytelling has always been at the core of human nature. Major technological breakthroughs that changed society in fundamental ways have also allowed for richer and more engaging stories to be told. It is not hard to imagine our ancestors gathering around a fire in a cave and enjoying stories that were made richer by supporting cave paintings. Writing, and later the printing press, led to more varied and richer stories that were distributed more widely than ever before. More recently, television led to an explosion in the use and distribution of video for storytelling. Today, all of us are lucky to be witnessing the changes brought about by the Internet. Like previous major technological breakthroughs, the Internet is also having a profound impact on storytelling.
Netflix lies at the intersection of the Internet and storytelling. We are inventing Internet television. Our main product and source of revenue is a subscription service that allows members to stream any video in our collection of movies and TV shows at any time on a wide range of Internet-connected devices. As of this writing, we have more than 65 million members who stream more than 100 million hours of movies and TV shows per day.
The Internet television space is young and competition is fierce, so innovation is crucial. A key pillar of our product is the recommender system that helps our members find videos to watch in every session. Our recommender system is not one algorithm, but rather a collection of different algorithms serving different use cases that come together to create the complete Netflix experience. We give an overview of the various algorithms in our recommender system in Section 2, and discuss their business value in Section 3. We describe the process that we use to improve our algorithms in Section 4, review some of our key open problems in Section 5, and present our conclusions in Section 6. Read More (click on the PDF symbol)
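The paper surveys a whole family of algorithms; as a rough illustration of the simplest of these ideas, here is a minimal user-based collaborative-filtering sketch in Python. The ratings, titles, and function names are invented for the example and bear no relation to Netflix's actual system, which combines many far more sophisticated models.

```python
from math import sqrt

# Toy ratings matrix: user -> {title: rating}. All data is illustrative.
ratings = {
    "ann": {"Stranger Things": 5, "The Crown": 3, "Narcos": 4},
    "ben": {"Stranger Things": 4, "Narcos": 5},
    "cat": {"The Crown": 5, "Narcos": 2},
}

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[t] * v[t] for t in shared)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, k=1):
    """Score titles the user hasn't seen by similarity-weighted
    ratings from other users, and return the top k."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], their_ratings)
        for title, r in their_ratings.items():
            if title not in ratings[user]:
                scores[title] = scores.get(title, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

With the toy data above, `recommend("cat")` surfaces "Stranger Things", the only title cat has not rated, weighted by how similarly ann and ben rate the shows they share with her.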
Artificial Intelligence May Not 'Hallucinate' After All
Thanks to advances in machine learning, computers have gotten really good at identifying what’s in photographs. They started beating humans at the task years ago, and can now even generate fake images that look eerily real. While the technology has come a long way, it’s still not entirely foolproof. In particular, researchers have found that image detection algorithms remain susceptible to a class of problems called adversarial examples.
Adversarial examples are like optical (or audio) illusions for AI. By altering a handful of pixels, a computer scientist can fool a machine learning classifier into thinking, say, a picture of a rifle is actually one of a helicopter. But to you or me, the image would still look like a gun; it almost seems as if the algorithm is hallucinating. As image recognition technology is used in more places, adversarial examples may present a troubling security risk. Experts have shown they can be used to do things like cause a self-driving car to ignore a stop sign, or make a facial recognition system falsely identify someone. Read More
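To make the "altering a handful of pixels" idea concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation applied to a toy logistic-regression classifier. Real attacks target deep networks; the weights, inputs, and step size below are invented purely for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy linear classifier: p(class=1) = sigmoid(w . x + b). Weights are made up.
w = [2.0, -3.0, 1.5]
b = 0.1

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(x, y, eps=0.5):
    """Fast-gradient-sign step. For this linear model the gradient of the
    cross-entropy loss with respect to the input is (p - y) * w, so each
    feature is nudged by eps in the direction that increases the loss."""
    p = predict(x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [1.0, 0.2, 0.5]           # confidently classified as class 1 (p > 0.5)
x_adv = fgsm_perturb(x, y=1)  # a small shift per feature flips the prediction
```

The same principle scales up to image classifiers: each pixel is shifted by an imperceptible amount in whichever direction the loss gradient points, and the model's prediction flips even though the picture looks unchanged to a human.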
Adversarial Examples Are Not Bugs, They Are Features
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data. Read More
The Race for Artificial Intelligence: China vs. America
Let’s be clear: artificial intelligence, and in particular its latest development, deep learning, which mimics the way the human mind works, first emerged in America. This gave the U.S. a huge head start over the rest of the world, including China, and put it firmly in the lead of the race for AI.
What Americans didn’t develop at home, they bought from Europe. In this respect, two British firms stand out with groundbreaking contributions to AI development: ARM and DeepMind.
While all eyes are trained on the AI race between China and America, is there a role left for Europe?
From the start of the digital revolution, and in spite of America’s lead, Europe has always had a fundamental role in digital research, a role often overlooked and even downplayed by the media mesmerized by Silicon Valley fireworks.
But the fireworks are now dying down and getting messy, while China is on the rise. Read More
Russian upgraded Su-25 attack aircraft to get sighting system with artificial intelligence
MOSCOW, May 8. /TASS/. Russia’s upgraded Su-25SM3 attack aircraft will get an onboard target acquisition and sighting system with artificial intelligence elements that allows designated targets to be struck with virtually no pilot involvement, a source in the defense industry told TASS on Wednesday.
“As part of the further upgrade of attack aircraft, the latest Su-25SM3 versions will be furnished with a new sighting system. It will be fully automated: the pilot will only have to select a target on the screen, and artificial intelligence will do all the rest,” the source said.
The target acquisition system with artificial intelligence will be able to independently identify hostile targets, keep them in sight and guide missiles. The new technology has been integrated into the unified troop command and control system, which makes it possible to map an optimal route to the target and the trajectory for weapons employment. Upgraded attack aircraft will also be able to receive target data from external sources through the command and control system. Read More
Artificial Intelligence: A Cybersecurity Solution or the Greatest Risk of All?
Artificial intelligence has, in recent years, developed rapidly, serving as the basis for numerous mainstream applications. From digital assistants to healthcare and from manufacturing to education, AI is widely considered a powerhouse that has yet to unleash its full potential. But in the face of rising cybercrime rates, one question seems especially pertinent: is AI a solution for cybersecurity, or just another threat? Read More
Artificial Intelligence (AI) Solutions on Edge Devices
Artificial Intelligence (AI) solutions, particularly those based on deep learning in areas such as computer vision, are typically built in cloud-based environments that require heavy computing capacity.
Inference is far less compute-intensive than training, but latency matters more, since a model must deliver results in real time. Most inference is still performed in the cloud or on a server, but as the diversity of AI applications grows, the centralized training-and-inference paradigm is coming into question.
It is possible, and becoming easier, to run AI and machine learning with analytics at the Edge today, depending on the size and scale of the Edge site and the particular system being used. While Edge computing systems are much smaller than those found in central data centers, they have matured, and thanks to the immense growth in the processing power of today’s x86 commodity servers, it is remarkable how many workloads they can now run successfully. Read More
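The training/inference asymmetry the article describes can be sketched in a few lines of Python: training loops over the data thousands of times (the compute-heavy, cloud-side phase), while inference on the shipped parameters is a single multiply-add, cheap enough for a small Edge device. The model and data below are purely illustrative.

```python
# "Cloud" side: fit a tiny linear model y = a*x + b by stochastic gradient
# descent. Training makes thousands of passes over the data -- this is the
# compute-heavy phase the article places in the data center.
data = [(x, 2.0 * x + 1.0) for x in [i / 10 for i in range(100)]]
a, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):                 # many epochs over the dataset
    for x, y in data:
        err = (a * x + b) - y
        a -= lr * err * x             # gradient step on the slope
        b -= lr * err                 # gradient step on the intercept

# "Edge" side: once the trained parameters (a, b) are shipped to the device,
# inference on a new input is one multiply and one add.
def infer(x):
    return a * x + b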