Modern text-to-speech algorithms are incredibly capable, and you need look no further for evidence than Google’s recently open-sourced SpecAugment or Translatotron — the latter can directly translate a person’s voice into another language while retaining tone and tenor. But there’s always room for improvement.
Toward that end, researchers at Microsoft recently detailed in a paper (“Almost Unsupervised Text to Speech and Automatic Speech Recognition”) an AI system that leverages unsupervised learning — a branch of machine learning that gleans knowledge from unlabeled and uncategorized data — to achieve 99.84% word intelligibility accuracy for text to speech and an 11.7% phoneme error rate (PER) for automatic speech recognition. All the more impressive, the model required only 200 audio clips and corresponding transcriptions.
Almost Unsupervised Text to Speech and Automatic Speech Recognition
Text to speech (TTS) and automatic speech recognition (ASR) are two dual tasks in speech processing, and both have achieved impressive performance thanks to recent advances in deep learning and large amounts of aligned speech and text data. However, the lack of aligned data poses a major practical problem for TTS and ASR on low-resource languages. In this paper, by leveraging the dual nature of the two tasks, we propose an almost unsupervised learning method that requires only a few hundred paired examples and extra unpaired data for TTS and ASR. Our method consists of the following components: (1) a denoising auto-encoder, which reconstructs speech and text sequences to develop language-modeling capability in both the speech and text domains; (2) dual transformation, where the TTS model transforms the text y into speech x̂ and the ASR model leverages the transformed pair (x̂, y) for training, and vice versa, to boost the accuracy of the two tasks; (3) bidirectional sequence modeling, which addresses error propagation, especially in long speech and text sequences, when training with few paired data; (4) a unified model structure, which combines all the above components for TTS and ASR based on the Transformer model. Our method achieves a 99.84% word-level intelligibility rate and 2.68 MOS for TTS, and 11.7% PER for ASR, on the LJSpeech dataset, leveraging only 200 paired speech and text samples (about 20 minutes of audio) together with extra unpaired speech and text data.
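To make the dual-transformation idea in component (2) concrete, here is a minimal sketch, not the authors’ code: the TTS model synthesizes pseudo speech from unpaired text and the ASR model trains on that pseudo-pair, and vice versa. The TinySeq2Seq class, feature dimensions, and MSE losses are hypothetical stand-ins for the paper’s Transformer models and actual training objectives.

```python
# Sketch of dual transformation: each model generates pseudo-labels (without
# gradients) that the other model then trains on. All shapes and losses are
# illustrative placeholders, not the paper's setup.

import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Stand-in for a Transformer-based TTS or ASR model."""
    def __init__(self, in_dim: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

TEXT_DIM, SPEECH_DIM = 32, 80               # e.g. phoneme features, mel bins
tts = TinySeq2Seq(TEXT_DIM, SPEECH_DIM)     # text  -> speech
asr = TinySeq2Seq(SPEECH_DIM, TEXT_DIM)     # speech -> text
opt = torch.optim.Adam(list(tts.parameters()) + list(asr.parameters()), lr=1e-3)
mse = nn.MSELoss()

# Toy unpaired data: a batch of text-only and a batch of speech-only sequences.
unpaired_text = torch.randn(8, 10, TEXT_DIM)
unpaired_speech = torch.randn(8, 10, SPEECH_DIM)

for step in range(100):
    # Text -> pseudo speech x_hat; train ASR on the pseudo-pair (x_hat, y).
    with torch.no_grad():
        pseudo_speech = tts(unpaired_text)
    loss_asr = mse(asr(pseudo_speech), unpaired_text)

    # Speech -> pseudo text y_hat; train TTS on the pseudo-pair (y_hat, x).
    with torch.no_grad():
        pseudo_text = asr(unpaired_speech)
    loss_tts = mse(tts(pseudo_text), unpaired_speech)

    opt.zero_grad()
    (loss_asr + loss_tts).backward()
    opt.step()
```

In the paper this loop runs alongside the denoising auto-encoder and is bootstrapped from the roughly 200 paired examples; the sketch only shows the pseudo-pair mechanics that let each task supervise the other.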
The touchy task of making robots seem human — but not too human
2017 is poised to be the year of the robot assistant. If you’re in the market, you’ll have plenty to choose from. Some look like descendants of Honda’s Asimo — shiny white bots with heads, eyes, arms, and legs. Ubtech’s Lynx has elbows, knees, and hands, which it can use to teach you yoga poses, of all things, while Hanson Robotics’ Sophia bot approaches *Ex Machina*-levels of believability. Others, like Amazon’s Alexa and Google Home, have no form. They come baked into simple speakers and desktop appliances. It seems most robot helpers take one of these two shapes: humanoid or monolithic.
Yet a middle ground is emerging — one with just a hint of anthropomorphism. LG’s Alexa-powered Hub robot has a “body” with a gently nipped-in “waist” and a screen with two blinking eyes. ElliQ, a tabletop robot assistant for the elderly that debuted last week at the Design Museum in London, features an hourglass-shaped “body” and a “head” that swivels. Kuri, a penguin-like helper from Mayfield Robotics, scoots around and looks at you but doesn’t speak. This is all deliberate. Designers and roboticists say a suggestion, rather than a declaration, of anthropomorphism could help people form closer connections with their robot assistants.
But don’t overdo it — the more your robot looks like C-3PO, the greater the risk of disappointment. Read More