Daily Archives: October 20, 2022
No Language Left Behind: Scaling Human-Centered Machine Translation
Driven by the goal of eradicating language barriers on a global scale, machine translation has solidified itself as a key focus of artificial intelligence research today. However, such efforts have coalesced around a small subset of languages, leaving behind the vast majority of mostly low-resource languages. What does it take to break the 200-language barrier while ensuring safe, high-quality results, all while keeping ethical considerations in mind? In No Language Left Behind, we took on this challenge by first contextualizing the need for low-resource language translation support through exploratory interviews with native speakers. Then, we created datasets and models aimed at narrowing the performance gap between low- and high-resource languages. More specifically, we developed a conditional compute model based on Sparsely Gated Mixture of Experts that is trained on data obtained with novel and effective data mining techniques tailored for low-resource languages. We propose multiple architectural and training improvements to counteract overfitting while training on thousands of tasks. Critically, we evaluated the performance of over 40,000 different translation directions using a human-translated benchmark, Flores-200, and combined human evaluation with a novel toxicity benchmark covering all languages in Flores-200 to assess translation safety. Our model achieves an improvement of 44% BLEU relative to the previous state-of-the-art, laying important groundwork towards realizing a universal translation system. Finally, we open-source all contributions described in this work, accessible at https://github.com/facebookresearch/fairseq/tree/nllb. Read More
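For readers who want to try the released models, here is a minimal sketch of translating a sentence with one of the open-sourced NLLB-200 checkpoints via the Hugging Face transformers port. The checkpoint name facebook/nllb-200-distilled-600M and the use of transformers (rather than the fairseq code linked above) are assumptions of this sketch, not details stated in the announcement; language codes follow the Flores-200 convention (e.g. eng_Latn, fra_Latn).

```python
# Minimal sketch: translate English to French with an NLLB-200 checkpoint.
# Assumes the Hugging Face port of the release; "facebook/nllb-200-distilled-600M"
# is one assumed checkpoint name. The official code lives in the fairseq repo above.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "facebook/nllb-200-distilled-600M"  # assumed checkpoint

# The source language is set on the tokenizer using a Flores-200 language code.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

text = "Machine translation should leave no language behind."
inputs = tokenizer(text, return_tensors="pt")

# Force decoding to begin with the target-language token (here French, fra_Latn).
translated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])
```

Swapping the src_lang value and the forced target token selects any of the supported translation directions from the same checkpoint; that single shared model is what the conditional-compute Mixture of Experts design is meant to make tractable.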
Adobe’s latest AI prototype gives even the worst dancers some impressive moves
Project Motion Mix converts a still photograph into a dancing animation using machine learning
Adobe will reveal a prototype AI project later today at Adobe Max 2022 that can convert a still image of a person into an animated dancer. Adobe says that all you need to do is load a full-body picture into Project Motion Mix, and the system will turn that individual into an AI-controlled puppet, animating new dance moves.
The system uses a combination of AI-based motion generation and what Adobe is calling “human rendering technologies” to create its animations. The software lets users select from different dance styles, tweak the background, and add multiple dancers into one frame. However, it’s still just a prototype, and Adobe says it isn’t sure if or when the system might be added to its user-facing services. Read More
Meta touts AI that translates spoken-only language
Meta on Wednesday said that it built an artificial intelligence system that translates Hokkien into English even though the Taiwanese language lacks a standard written form.
The Silicon Valley tech titan that owns Facebook and Instagram billed the work, part of its Universal Speech Translator project, as an effort to enable users around the world to socialize regardless of the languages they speak.
…The fledgling system for translating Hokkien was billed by Meta as the first artificial intelligence-powered “speech-to-speech translation system developed for an unwritten language.” Read More