A.I. Is Translating Messages of Long-Lost Languages

Researchers from MIT and Google Brain discover how to use deep learning to decipher ancient languages.

The technique can be used to read languages that died long ago.

The method builds on the ability of machines to quickly complete monotonous tasks. Read More

#nlp

Faster Neural Network Training with Data Echoing

In the twilight of Moore’s law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of the training pipeline, such as disk I/O and data preprocessing, do not run on accelerators. As accelerators continue to improve, these earlier stages will increasingly become the bottleneck. In this paper, we introduce “data echoing,” which reduces the total computation used by earlier pipeline stages and speeds up training whenever computation upstream from accelerators dominates the training time. Data echoing reuses (or “echoes”) intermediate outputs from earlier pipeline stages in order to reclaim idle capacity. We investigate the behavior of different data echoing algorithms on various workloads, for various amounts of echoing, and for various batch sizes. We find that in all settings, at least one data echoing algorithm can match the baseline’s predictive performance using less upstream computation. In some cases, data echoing can even compensate for a 4x slower input pipeline. Read More
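The core idea, reusing each batch from the upstream pipeline several times so the accelerator is not starved, can be sketched as a simple generator wrapper. This is a minimal illustration in plain Python; the function name and parameters are ours, not the paper's implementation:

```python
def data_echoing(upstream, echo_factor=2):
    """Yield each item from the upstream pipeline `echo_factor` times,
    reclaiming idle accelerator capacity by trading repeated (cheap)
    downstream steps for less upstream work such as disk I/O and
    preprocessing."""
    for item in upstream:
        for _ in range(echo_factor):
            yield item

# Each "preprocessed batch" is produced upstream once but consumed twice.
batches = [{"step": 1}, {"step": 2}, {"step": 3}]
echoed = list(data_echoing(iter(batches), echo_factor=2))
```

In practice the paper studies several echoing variants (for example, echoing before or after augmentation and shuffling, with or without re-shuffling between repeats), which differ in how closely repeated examples approximate fresh data.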

#neural-networks, #training

Deep Learning State of the Art (2019) – MIT

Read More

#deep-learning, #videos

China Internet Report 2019

China has emerged on the world stage with a host of global tech companies that are innovative and competitive. And increasingly, their successes are being studied and replicated in other markets. This report, informed by on-the-ground reporting by the South China Morning Post and Abacus, offers insights into China’s tech trailblazers and the big important trends shaping the world’s biggest internet community. Read More

#china, #china-vs-us

AI Can Now Create Artificial People – What Does That Mean For Humans?

When DataGrid, Inc. announced it successfully developed an AI system capable of generating high-quality photorealistic Japanese faces, it was impressive. But now the company has gone even further. Its artificial intelligence (AI) system can now create not only faces and hair from a variety of ethnicities, but bodies that can move and wear any outfit. While these images are fictitious, they are incredibly photorealistic. Read More

#fake, #gans

Computer vision harvesting: four algorithms running simultaneously, performing license plate number recognition, brand and model recognition, logo detection, and car color recognition.

Read More

#image-recognition, #videos

The Blind Giant’s Quandary — Government's Role in setting AI Standards

Federal agencies should stay in the back seat for AI standard setting.

This comment is in response to the National Institute of Standards and Technology’s (NIST) request for information on artificial intelligence (AI) standards.

Even if the market fails to provide the optimal level of the right standards, it does not follow that active government involvement would lead to a better outcome. In fact, as is well studied in the literature, government failure in standard setting is a possibility that should not be overlooked.

Stanford economist Paul David, known internationally for his contributions to the economics of science and technology, famously termed this risk of government failure in standard setting the Blind Giant's Quandary. Read More

#standards

Restoring Vision With Bionic Eyes: No Longer Science Fiction

Bionic vision might sound like science fiction, but Dr. Michael Beyeler is working on just that.

Originally from Switzerland, Dr. Beyeler is wrapping up his postdoctoral fellowship at the University of Washington before moving to the University of California Santa Barbara this fall to head up the newly formed Bionic Vision Lab in the Departments of Computer Science and Psychological & Brain Sciences.

We spoke with him about this “deep fascination with the brain” and how he hopes his work will eventually be able to restore vision to the blind. Read More

#human, #vision

Exposing DeepFake Videos By Detecting Face Warping Artifacts

In this work, we describe a new deep-learning-based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolution, which then need to be warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that these can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use a large number of real and DeepFake-generated images to train a CNN classifier, our method does not need DeepFake-generated images as negative training examples, since we target the artifacts of affine face warping as the distinctive feature for distinguishing real and fake images. The advantages of our method are two-fold: (1) such artifacts can be simulated directly with simple image processing operations on an image to produce a negative example; since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves substantial time and resources in training data collection; (2) since such artifacts are generally present in DeepFake videos from different sources, our method is more robust than others. Our method is evaluated on two sets of DeepFake video datasets for its effectiveness in practice. Read More
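The negative-example trick in point (1) can be approximated in a few lines: degrade the resolution inside a face region, mimicking the mismatch that face warping leaves behind. The sketch below treats a grayscale image as a plain 2D list; the helper name and the nearest-neighbour down/upsampling are our assumptions for illustration, not the authors' exact operations.

```python
def simulate_warp_artifact(image, box, scale=4):
    """Produce a negative training example by lowering the effective
    resolution inside the face box: sample the region on a coarse grid
    (downsample by `scale`) and replicate each sample (nearest-neighbour
    upsample), mimicking the resolution mismatch of a warped face.
    `image` is a 2D list of grayscale values; `box` is (top, left, h, w).
    """
    top, left, h, w = box
    out = [row[:] for row in image]  # copy so the original is untouched
    for i in range(h):
        for j in range(w):
            # nearest-neighbour sample from the coarse grid
            src_i = top + (i // scale) * scale
            src_j = left + (j // scale) * scale
            out[top + i][left + j] = image[src_i][src_j]
    return out
```

A classifier trained on pairs of original crops and such degraded crops learns to flag the resolution mismatch itself rather than the quirks of any particular DeepFake generator, which is why no generated videos are needed for training.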

#fake, #neural-networks

Detecting deepfakes by looking closely reveals a way to protect against them

Deepfake videos are hard for untrained eyes to detect because they can be quite realistic. Whether used as personal weapons of revenge, to manipulate financial markets or to destabilize international relations, videos depicting people doing and saying things they never did or said are a fundamental threat to the longstanding idea that “seeing is believing.” Not anymore.

Most deepfakes are made by showing a computer algorithm many images of a person and then having it use what it saw to generate new face images. At the same time, the person's voice is synthesized, so it both looks and sounds like the person has said something new.

Now, our research can identify the manipulation of a video by looking closely at the pixels of specific frames. Taking one step further, we also developed an active measure to protect individuals from becoming victims of deepfakes. Read More

#fake