Data is the raw material that fuels artificial intelligence and machine learning initiatives, but in practice it can't be all that raw. It needs to be as accurate, timely, and well-vetted as possible; otherwise AI will deliver erroneous or biased results. At this stage, most enterprises haven't quite locked down the viability of the data employed in their AI efforts. Read More
Monthly Archives: March 2020
AI adoption in the enterprise 2020
Last year, when we felt interest in artificial intelligence (AI) was approaching a fever pitch, we created a survey to ask about AI adoption. When we analyzed the results, we determined the AI space was in a state of rapid change, so we eagerly commissioned a follow-up survey to help find out where AI stands right now. The new survey, which ran for a few weeks in December 2019, generated an enthusiastic 1,388 responses. The update sheds light on what AI adoption looks like in the enterprise (hint: deployments are shifting from prototype to production), the popularity of specific techniques and tools, the challenges experienced by adopters, and so on. There's a lot to dig into here, so let's get started. Read More
AI-washing: is it machine learning … or worse?
There are widespread misconceptions about artificial intelligence (AI), including what it can and can't do, which means potential users may have unrealistic expectations of what they will see when they're presented with AI. For some, it conjures up images of robots; others expect it to solve all problems and automatically understand numerous manual processes and tasks without training or configuration. The truth is that while AI has advanced dramatically over the last decade, it is still in its infancy.
The gap between reality and marketing is exacerbated by business technology solutions claiming to have artificial intelligence capabilities when they don't. This is the world of 'AI-washing', which is becoming something of a phenomenon, with some so-called AI solutions being merely data aggregators. Read More
Rossum’s Universal Robots – Karel Čapek — The World Reimagined in 1920
Non-Adversarial Video Synthesis with Learned Priors
Most of the existing works in video synthesis focus on generating videos using adversarial learning. Despite their success, these methods often require an input reference frame or fail to generate diverse videos from the given data distribution, with little to no uniformity in the quality of videos that can be generated. Different from these methods, we focus on the problem of generating videos from latent noise vectors, without any reference input frames. To this end, we develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning. Optimizing the input latent space along with the network weights allows us to generate videos in a controlled environment, i.e., we can faithfully generate all videos the model has seen during the learning process as well as new unseen videos. Extensive experiments on three challenging and diverse datasets demonstrate that our approach generates superior quality videos compared to the existing state-of-the-art methods. Read More
Code
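The abstract's core idea, jointly optimizing the input latent space together with the network weights under a non-adversarial reconstruction objective, can be illustrated with a toy NumPy sketch. Everything below is an assumption for illustration only: a linear map stands in for the paper's recurrent generator, and the "videos" are random vectors rather than real frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 "videos", each flattened to a 16-dim vector (illustrative only).
X = rng.normal(size=(8, 16))

# Jointly learned variables: one latent code per sample, plus generator weights.
Z = rng.normal(size=(8, 4)) * 0.1   # input latent space (optimized directly)
W = rng.normal(size=(4, 16)) * 0.1  # linear "generator" standing in for the real network

lr = 0.05
losses = []
for step in range(500):
    recon = Z @ W                    # generate samples from latent codes
    err = recon - X                  # non-adversarial reconstruction error
    loss = (err ** 2).mean()
    losses.append(loss)
    # Gradient steps on BOTH the latent codes and the generator weights.
    grad_Z = 2 * err @ W.T / err.size
    grad_W = 2 * Z.T @ err / err.size
    Z -= lr * grad_Z
    W -= lr * grad_W

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because each training sample owns a learned latent code, the model can reproduce every video it saw during training, and sampling new codes near the learned ones yields novel outputs, which is the controllability the abstract refers to.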
Algorithm and Blues
The Latest and Greatest AI-Enabled Deepfake Takes us ‘Back to the Future’
With well over 6 million views since its mid-February release, YouTuber EZRyderX47's Back to the Future deepfake video, with Robert Downey Jr. and Tom Holland seamlessly replacing Christopher Lloyd and Michael J. Fox, has become quite the viral sensation. The video is brilliantly done, from the lip-sync to the anything but uncanny eyes; the choice of film and clip was inspired as well, a welcome window into a new riff on a Hollywood classic. Produced using two readily available pieces of free software – HitFilm Express, from FXhome, and Deepfacelab – the startlingly believable piece instantly conjures up all sorts of notions, both wonderful and sinister, regarding the seemingly unlimited horizons of AI-enhanced digital technology. If today's visual magicians can create any image with stunning photoreal clarity, what, dare we ask, can propagandists, criminals and other "bad" actors do with the same digital tools? Read More
#fake, #videos
Stanza: A Python Natural Language Processing Toolkit for Many Human Languages
We introduce Stanza, an open-source Python natural language processing toolkit supporting 66 human languages. Compared to existing widely used toolkits, Stanza features a language-agnostic fully neural pipeline for text analysis, including tokenization, multiword token expansion, lemmatization, part-of-speech and morphological feature tagging, dependency parsing, and named entity recognition. We have trained Stanza on a total of 112 datasets, including the Universal Dependencies treebanks and other multilingual corpora, and show that the same neural architecture generalizes well and achieves competitive performance on all languages tested. Additionally, Stanza includes a native Python interface to the widely used Java Stanford CoreNLP software, which further extends its functionalities to cover other tasks such as coreference resolution and relation extraction. Read More
Code
Moscow uses facial recognition network to maintain quarantine
A vast and contentious network of facial recognition cameras keeping watch over Moscow is now playing a key role in the battle against the spread of the coronavirus in Russia.
The city rolled out the technology just before the epidemic reached Russia, ignoring protests and legal complaints over sophisticated state surveillance. Read More
Massively Scaling Reinforcement Learning with SEED RL
Reinforcement learning (RL) has seen impressive advances over the last few years, as demonstrated by the recent success in solving games such as Go and Dota 2. Models, or agents, learn by exploring an environment, such as a game, while optimizing for specified goals. However, current RL techniques require increasingly large amounts of training to successfully learn even simple games, which makes iterating on research and product ideas computationally expensive and time consuming. Read More
Code
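The abstract above describes the basic RL loop: an agent exploring an environment while optimizing for a specified goal. As a minimal illustration of that loop (not of SEED RL's distributed architecture), here is a tabular Q-learning sketch on a hypothetical five-state corridor, where the agent must learn to walk right to reach a reward:

```python
import random

random.seed(0)

# A tiny corridor: states 0..4, start at state 0, reward only upon reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / move right

Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action] value table
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration rate

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy exploration of the environment.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move Q[s][a] toward r + gamma * max_a' Q[s'][a'].
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy in every non-terminal state should be "move right".
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy)
```

The "increasingly large amounts of training" the abstract mentions come from scaling this same loop to environments vastly harder than a five-state corridor, which is the cost SEED RL's architecture aims to reduce.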