Are we looking at the evolution or extinction of editing and post-production?

When we teased out Eric Escobar’s recent piece on social media, a number of people jumped on us for publishing click-bait. That was obviously not the intention, as anyone who read the article can attest, but it’s an understandable reaction to seeing “Video Editor: A job on the edge of extinction”. After all, how often do you see a headline along the lines of “The Machines Are Coming for Your Job!” only to find that the headline is just as misleading as the information in the article itself?

No, video editors aren’t all going to find themselves extinct or unemployed anytime soon, but things have changed for them and for other post-production professionals. One of the biggest factors in that change is automation, which has had, and will continue to have, a major impact on the livelihoods of everyone working in post. Instant subclips, project organization, and fast editing while ingesting are just a few of the ways post-production has been affected by automation.

Change can be a good thing though. Read More

#nlp

Video Editor: A job on the edge of extinction

I know this post has a clickbait-sounding title, and for that I apologize, but I’m not writing this for click-throughs or ad impressions. This is about what software can do now, and a guess at what it will do in the very, very near future.

Right now, software “reads” articles and emails; this is how Google analyzes and ranks what we write and figures out what to sell us. Software even “writes” articles, more and more every day. Software doesn’t write the opinion pieces or the long, interesting New Yorker think-pieces; it goes for the easy stuff: financials, sports scores, crime beats. The kind of stuff that feels boilerplate and perfunctory.

This is old news, a change that has already transformed publishing and will change it more as the tech gets better. Read More

#nlp

Deep Visual-Semantic Alignments for Generating Image Descriptions

We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state-of-the-art results in retrieval experiments on the Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations. Read More
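To make the alignment idea concrete, here is a minimal sketch (not the paper’s code) of the core mechanism: project CNN region features and bidirectional-RNN word vectors into a shared embedding space, then score an image–sentence pair by how well each word matches its best-aligned region. The dimensions, the plain GRU, and the max-over-regions scoring rule are illustrative assumptions standing in for the paper’s exact components.

```python
import torch
import torch.nn as nn

class RegionEncoder(nn.Module):
    """Projects per-region CNN features into the joint embedding space."""
    def __init__(self, cnn_dim=2048, embed_dim=512):
        super().__init__()
        self.proj = nn.Linear(cnn_dim, embed_dim)

    def forward(self, region_feats):             # (num_regions, cnn_dim)
        return self.proj(region_feats)           # (num_regions, embed_dim)

class SentenceEncoder(nn.Module):
    """Bidirectional RNN over word embeddings; one vector per word."""
    def __init__(self, vocab_size=10000, word_dim=300, embed_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        self.birnn = nn.GRU(word_dim, embed_dim // 2,
                            bidirectional=True, batch_first=True)

    def forward(self, token_ids):                # (1, num_words)
        hidden, _ = self.birnn(self.embed(token_ids))
        return hidden.squeeze(0)                 # (num_words, embed_dim)

def image_sentence_score(regions, words):
    """Each word votes for its best-matching region; the votes are summed."""
    sims = regions @ words.t()                   # (num_regions, num_words)
    return sims.max(dim=0).values.sum()

# Toy usage with random stand-ins for CNN region features and token ids.
regions = RegionEncoder()(torch.randn(19, 2048))
words = SentenceEncoder()(torch.randint(0, 10000, (1, 7)))
print(image_sentence_score(regions, words))
```

In the paper, a structured ranking objective pushes scores of true image–sentence pairs above mismatched ones; the sketch above only shows the scoring side of that setup.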

#nlp

Microsoft’s AI generates realistic speech with only 200 training samples

Modern text-to-speech algorithms are incredibly capable, and you needn’t look further for evidence than Google’s recently open-sourced SpecAugment or Translatotron — the latter can directly translate a person’s voice into another language while retaining tone and tenor. But there’s always room for improvement.

Toward that end, researchers at Microsoft recently detailed in a paper (“Almost Unsupervised Text to Speech and Automatic Speech Recognition”) an AI system that leverages unsupervised learning — a branch of machine learning that gleans knowledge from unlabeled, unclassified, and uncategorized data — to achieve 99.84% word intelligibility accuracy for text-to-speech and an 11.7% phoneme error rate (PER) for automatic speech recognition. All the more impressive, the model required only 200 audio clips and corresponding transcriptions. Read More

#nlp

Almost Unsupervised Text to Speech and Automatic Speech Recognition

Text to speech (TTS) and automatic speech recognition (ASR) are two dual tasks in speech processing, and both achieve impressive performance thanks to recent advances in deep learning and large amounts of aligned speech and text data. However, the lack of aligned data poses a major practical problem for TTS and ASR on low-resource languages. In this paper, by leveraging the dual nature of the two tasks, we propose an almost unsupervised learning method that only leverages a few hundred paired examples and extra unpaired data for TTS and ASR. Our method consists of the following components: (1) a denoising auto-encoder, which reconstructs speech and text sequences respectively to develop the capability of language modeling in both the speech and text domains; (2) dual transformation, where the TTS model transforms the text y into speech x̂, and the ASR model leverages the transformed pair (x̂, y) for training, and vice versa, to boost the accuracy of the two tasks; (3) bidirectional sequence modeling, which addresses error propagation, especially in long speech and text sequences, when training with few paired data; (4) a unified model structure, which combines all the above components for TTS and ASR based on the Transformer model. Our method achieves a 99.84% word-level intelligibility rate and 2.68 MOS for TTS, and 11.7% PER for ASR, on the LJSpeech dataset, by leveraging only 200 paired speech and text samples (about 20 minutes of audio), together with extra unpaired speech and text data. Read More
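As a rough illustration of the dual transformation component, the sketch below (an assumption-laden toy, not the paper’s Transformer-based system) shows the loop in miniature: TTS synthesizes speech from unpaired text to create pseudo-pairs for training ASR, and ASR transcribes unpaired speech to create pseudo-pairs for training TTS. The tiny linear stand-in models, the 80-dimensional mel frames, and the frame-level losses are placeholders for the real components.

```python
import torch
import torch.nn as nn

MEL_DIM, VOCAB = 80, 40

tts = nn.Linear(VOCAB, MEL_DIM)   # stand-in: text frame -> mel frame
asr = nn.Linear(MEL_DIM, VOCAB)   # stand-in: mel frame -> character logits
opt = torch.optim.Adam(list(tts.parameters()) + list(asr.parameters()), lr=1e-3)

def dual_transformation_step(unpaired_text, unpaired_speech):
    """One round of mutual pseudo-labelling between TTS and ASR."""
    # Text-only data: synthesize x_hat = TTS(y), then train ASR on (x_hat, y).
    with torch.no_grad():
        speech_hat = tts(unpaired_text)
    asr_loss = nn.functional.cross_entropy(asr(speech_hat),
                                           unpaired_text.argmax(dim=-1))

    # Speech-only data: transcribe y_hat = ASR(x), then train TTS on (x, y_hat).
    with torch.no_grad():
        text_hat = asr(unpaired_speech).softmax(dim=-1)
    tts_loss = nn.functional.mse_loss(tts(text_hat), unpaired_speech)

    # Update both models on their pseudo-paired losses.
    opt.zero_grad()
    (asr_loss + tts_loss).backward()
    opt.step()

# Toy usage: one-hot "text" frames and random mel "speech" frames.
text = nn.functional.one_hot(torch.randint(0, VOCAB, (16,)), VOCAB).float()
speech = torch.randn(16, MEL_DIM)
dual_transformation_step(text, speech)
```

Each model only updates on pairs produced by the other (the generation step runs without gradients), which is the essence of the dual transformation idea; the paper adds the denoising auto-encoder, bidirectional sequence modeling, and a shared Transformer backbone on top.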

#nlp

The touchy task of making robots seem human — but not too human

2017 is poised to be the year of the robot assistant. If you’re in the market, you’ll have plenty to choose from. Some look like descendants of Honda’s Asimo—shiny white bots with heads, eyes, arms, and legs. Ubtech’s Lynx has elbows, knees, and hands, which it can use to teach you yoga poses, of all things, while Hanson Robotics’ Sophia bot approaches *Ex Machina* levels of believability. Others, like Amazon’s Alexa and Google Home, have no form. They come baked into simple speakers and desktop appliances. It seems most robot helpers take one of these two shapes: humanoid or monolithic.

Yet a middle ground is emerging—one with just a hint of anthropomorphism. LG’s Alexa-powered Hub robot has a “body” with a gently nipped-in “waist,” and a screen with two blinking eyes. ElliQ, a tabletop robot assistant for the elderly that debuted last week at the Design Museum in London, features an hourglass-shaped “body” and a “head” that swivels. Kuri, a penguin-like helper from Mayfield Robotics, scoots around and looks at you but doesn’t speak. This is all deliberate. Designers and roboticists say a suggestion, rather than a declaration, of anthropomorphism could help people form closer connections with their robot assistants.

But don’t overdo it—the more like C-3PO your robot looks, the greater the risk of disappointment. Read More

#robotics

A Day in the Life of a Kiva Robot

Read More

#robotics

What is a robot?

Editor’s note: This is the first entry in a new video series, HardWIRED: Welcome to the Robotic Future, in which we explore the many fascinating machines that are transforming society. And we can’t do that without first defining what a robot even is.

When you hear the word “robot,” the first thing that probably comes to mind is a silvery humanoid, à la The Day the Earth Stood Still or C-3PO (more golden, I guess, but still metallic). But there’s also the Roomba, and autonomous drones, and technically also self-driving cars. A robot can be a lot of things these days―and this is just the beginning of their proliferation.

With so many different kinds of robots, how do you define what one is? It’s a physical thing―engineers agree on that, at least. But ask three different roboticists to define a robot and you’ll get three different answers. This isn’t a trivial semantic conundrum: Thinking about what a robot really is has implications for how humanity deals with the unfolding robo-revolution. Read More

#robotics

Artificial Intelligence and Collective Intelligence in Teams

Read More

#collective-intelligence

Artificial Intelligence and Collective Intelligence

The vision of artificial intelligence (AI) is often manifested through an autonomous software module (agent) in a complex and uncertain environment. The agent is capable of thinking ahead and acting for long periods of time in accordance with its goals/objectives. It is also capable of learning and refining its understanding of the world. The agent may accomplish this based on its own experience, or from the feedback provided by humans. Famous recent examples include self-driving cars (Thrun 2006) and the IBM Jeopardy player Watson (Ferrucci et al. 2010). This chapter explores the immense value of AI techniques for collective intelligence, including ways to make interactions between large numbers of humans more efficient.

By defining collective intelligence as “groups of individuals acting collectively in an intelligent manner,” one soon wishes to nail down the meaning of individual. In this chapter, individuals may be software agents and/or people and the collective may consist of a mixture of both. The rise of collective intelligence allows novel possibilities of seamlessly integrating machine and human intelligence at a large scale – one of the holy grails of AI (known in the literature as mixed-initiative systems (Horvitz 2007)). Our chapter focuses on one such integration – the use of machine intelligence for the management of crowdsourcing platforms (Weld, Mausam, and Dai 2011). Read More

#collective-intelligence