Artificial intelligence researchers have so far failed to give intelligent agents the common-sense knowledge they need to reason about the world, and without that knowledge agents cannot truly interact with it. Traditionally, there have been two unsuccessful approaches to the problem: symbolic logic and deep learning. A new project, called COMET, tries to bring these two approaches together. Although it has not yet succeeded, it offers the possibility of progress. Read More
Creating Next-Gen Video Game AI With Reinforcement Learning
Learn how reinforcement learning is being used to upend traditional methods of creating video game AI
Reinforcement learning stands to become the new gold standard for creating intelligent video game AI. The chief advantage of reinforcement learning (RL) over traditional game AI methods is that, rather than hand-crafting the AI’s logic with complicated behavior trees, one simply rewards the behavior the AI should exhibit, and the agent learns on its own the sequence of actions needed to produce it. In essence, this is how one might teach a dog to perform tricks using a food reward.
The RL approach to game AI can be used to train a variety of strategic behaviors, including path finding, NPC attack and defense, and almost every behavior a human is capable of exhibiting while playing a video game. Read More
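To make the reward-driven loop concrete, here is a minimal sketch, not drawn from any particular game or engine: it assumes a small grid world in which an NPC is rewarded only for reaching a goal tile, and uses tabular Q-learning so the agent discovers a pathfinding policy on its own. The grid size, reward values, and hyperparameters are all illustrative assumptions.

```python
# Minimal tabular Q-learning sketch: an NPC learns pathfinding on a 5x5 grid.
# Grid size, rewards, and hyperparameters are illustrative assumptions only.
import random

SIZE = 5                      # 5x5 grid world
GOAL = (4, 4)                 # the tile the NPC is rewarded for reaching
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # the four cardinal moves

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1        # learning rate, discount, exploration

# Q-table: state (x, y) -> one value per action
Q = {(x, y): [0.0] * len(ACTIONS) for x in range(SIZE) for y in range(SIZE)}

def step(state, action_idx):
    """Apply an action, clamp to the grid, and return (next_state, reward, done)."""
    dx, dy = ACTIONS[action_idx]
    nx = min(max(state[0] + dx, 0), SIZE - 1)
    ny = min(max(state[1] + dy, 0), SIZE - 1)
    next_state = (nx, ny)
    if next_state == GOAL:
        return next_state, 1.0, True      # reward only the behavior we want
    return next_state, -0.01, False       # small step cost encourages short paths

for episode in range(2000):
    state = (0, 0)
    done = False
    while not done:
        # Epsilon-greedy: mostly exploit what was learned, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(len(ACTIONS))
        else:
            action = Q[state].index(max(Q[state]))
        next_state, reward, done = step(state, action)
        # Standard Q-learning update.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# After training, following the greedy action in each state traces a path
# from (0, 0) to the goal tile.
```

The same reward-shaping idea extends to the attack and defense behaviors mentioned above by swapping the grid for the game state and the table for a neural network, which is where deep RL methods come in.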
How Google Is Using AI & ML To Improve Search Experience
Recently, developers at Google detailed how they have been using artificial intelligence and machine learning to improve the search experience. The announcements were made during the Search On 2020 event, where the tech giant unveiled several AI enhancements that will help improve search results in the coming years. Read More
Facebook’s new polyglot AI can translate between 100 languages
The model, a culmination of various automated and machine learning techniques, is being open-sourced to the research community.
Facebook is open-sourcing a new AI language model called M2M-100 that can translate between any pair of 100 languages. Of the 4,450 possible language combinations, it translates 1,100 directly. This is in contrast to previous multilingual models, which rely heavily on English as an intermediate: a Chinese-to-French translation, for example, typically passes from Chinese to English and then from English to French, which increases the chance of introducing errors. Read More
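To make the pivoting issue concrete, the sketch below contrasts the two routing strategies. The `translate` callable here is a hypothetical stand-in for any translation model, not M2M-100’s actual API; the point is only that an English-pivoted route applies two lossy steps where a direct many-to-many route applies one.

```python
# Hypothetical sketch contrasting English-pivoted vs. direct translation routing.
# `translate` is a stand-in for any sequence-to-sequence translation model;
# it is NOT the real M2M-100 API.
from typing import Callable

Translator = Callable[[str, str, str], str]  # (text, source_lang, target_lang) -> text

def pivot_translate(text: str, src: str, tgt: str, translate: Translator) -> str:
    """English-centric routing: two model calls, each of which can add errors."""
    english = translate(text, src, "en")
    return translate(english, "en", tgt)

def direct_translate(text: str, src: str, tgt: str, translate: Translator) -> str:
    """Many-to-many routing (what M2M-100 enables): a single model call."""
    return translate(text, src, tgt)

if __name__ == "__main__":
    # Dummy model that just tags the text with each hop, making the extra
    # intermediate step visible.
    def dummy_model(text: str, src: str, tgt: str) -> str:
        return f"{text} [{src}->{tgt}]"

    print(pivot_translate("你好，世界", "zh", "fr", dummy_model))
    # 你好，世界 [zh->en] [en->fr]   <- two hops, two chances to drift
    print(direct_translate("你好，世界", "zh", "fr", dummy_model))
    # 你好，世界 [zh->fr]            <- one hop
```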
How can Startups Make Machine Learning Models Production-Ready?
Today, every technology startup needs to embrace AI and machine learning models to stay relevant. Machine learning (ML), if implemented well, can have a direct impact on a company’s ability to succeed and raise its next round of funding. However, the path to implementing ML solutions comes with some specific hurdles for startups.
Let’s discuss the top considerations for getting ML models production-ready and the best approaches for a startup. Read More
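As one concrete piece of the production-readiness puzzle, the sketch below shows a common minimal pattern: persist a trained model to disk and expose it behind a small HTTP endpoint. It assumes scikit-learn-style models, joblib, and Flask, plus a hypothetical `model.joblib` file; it is a starting-point sketch, not a hardened production setup, which would also need input validation, versioning, and monitoring.

```python
# Minimal sketch: serve a persisted scikit-learn model over HTTP.
# Assumes a model has already been trained and saved with joblib, e.g.:
#   joblib.dump(trained_pipeline, "model.joblib")
# This is a starting point, not a hardened production service.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")   # hypothetical path to the trained model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    features = payload["features"]            # expected: a list of feature rows
    predictions = model.predict(features)     # preprocessing lives inside the pipeline
    return jsonify({"predictions": predictions.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```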
The Globe and Mail’s Sophi Wins Best Digital News Start-Up
The Globe and Mail’s automation and predictive paywall engine, Sophi.io, won WAN-IFRA’s North American Digital Media Award in the category of Best Digital News Start-Up.
… Sophi Automation autonomously places 99% of the content on all of The Globe and Mail’s digital pages, including its homepage and section pages. This lets the newsroom focus on producing the finest journalism possible and has been so successful that it is now being used for print laydown as well. Read More
Detecting Deep-Fake Videos from Phoneme-Viseme Mismatches
Recent advances in machine learning and computer graphics have made it easier to convincingly manipulate video and audio. These so-called deep-fake videos range from complete full-face synthesis and replacement (face-swap), to complete mouth and audio synthesis and replacement (lip-sync), and partial word-based audio and mouth synthesis and replacement. Detection of deep fakes with only a small spatial and temporal manipulation is particularly challenging. We describe a technique to detect such manipulated videos by exploiting the fact that the dynamics of the mouth shape – visemes – are occasionally inconsistent with a spoken phoneme. We focus on the visemes associated with words having the sound M (mama), B (baba), or P (papa), in which the mouth must completely close in order to pronounce these phonemes. We observe that this is not the case in many deep-fake videos. Such phoneme-viseme mismatches can, therefore, be used to detect even spatially small and temporally localized manipulations. We demonstrate the efficacy and robustness of this approach to detect different types of deep-fake videos, including in-the-wild deep fakes. Read More
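The core consistency check the abstract describes can be approximated in a few lines. The sketch below assumes that, upstream, a forced aligner has produced time intervals for M/B/P phonemes and a face tracker has produced a per-frame lip-opening distance; both inputs, the function name, and the threshold are hypothetical. It then flags intervals in which the mouth never fully closes, which is the kind of mismatch the paper exploits.

```python
# Sketch of the phoneme-viseme consistency check described above.
# Inputs are assumed to come from upstream tools (a forced aligner for phoneme
# timings, a facial-landmark tracker for lip distances); both are hypothetical here.
from typing import List, Tuple
import numpy as np

def find_viseme_mismatches(
    lip_opening: np.ndarray,              # per-frame distance between upper and lower lip
    mbp_intervals: List[Tuple[int, int]], # (start_frame, end_frame) spans of M/B/P phonemes
    closed_threshold: float = 2.0,        # pixels; below this the mouth counts as closed
) -> List[Tuple[int, int]]:
    """Return the M/B/P intervals during which the mouth never fully closes.

    In genuine video, pronouncing M, B, or P requires the lips to touch, so
    lip_opening should dip below the threshold somewhere inside each interval.
    Intervals where it never does are candidate manipulations.
    """
    suspicious = []
    for start, end in mbp_intervals:
        segment = lip_opening[start:end + 1]
        if segment.size and segment.min() > closed_threshold:
            suspicious.append((start, end))
    return suspicious

# Example with synthetic data: the second interval never closes and is flagged.
lip = np.array([8, 6, 1, 5, 9, 9, 8, 7, 9, 3, 0.5, 4], dtype=float)
print(find_viseme_mismatches(lip, [(1, 3), (4, 8), (9, 11)]))   # -> [(4, 8)]
```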
Facebook’s open source M2M-100 model can translate between 100 different languages
Facebook today open-sourced M2M-100, an algorithm it claims is the first capable of translating between any pair of 100 languages without relying on English data. The machine learning model, which was trained on 2,200 language pairs, ostensibly outperforms English-centric systems on a metric commonly used to evaluate machine translation performance. Read More
In a battle of AI versus AI, researchers are preparing for the coming wave of deepfake propaganda
… Deepfake detection as a field of research began a little over three years ago. Early work focused on detecting visible problems in the videos, such as deepfakes that didn’t blink. Over time, however, the fakes have gotten better at mimicking real videos and have become harder to spot, both for people and for detection tools.
There are two major categories of deepfake detection research. The first involves looking at the behavior of people in the videos. … Other researchers, including our team, have been focused on differences that all deepfakes have compared to real videos. Read More
Lidar used to cost $75,000—here’s how Apple brought it to the iPhone
How Apple made affordable lidar with no moving parts for the iPhone.
At Tuesday’s unveiling of the iPhone 12, Apple touted the capabilities of its new lidar sensor. Apple says lidar will enhance the iPhone’s camera by allowing more rapid focus, especially in low-light situations. And it may enable the creation of a new generation of sophisticated augmented reality apps. Read More