The Blind Giant’s Quandary — Government’s Role in Setting AI Standards

FEDERAL AGENCIES SHOULD STAY IN THE BACK SEAT FOR AI STANDARD SETTING.

This comment is in response to the National Institute of Standards and Technology’s (NIST) request for information on artificial intelligence (AI) standards.

Even if the market fails to provide the optimal level of the right standards, it does not follow that active government involvement would lead to a better outcome. In fact, as the literature documents, government failure in standard setting is a real possibility that should not be overlooked.

Stanford economist Paul David, known internationally for his contributions to the economics of science and technology, famously dubbed this risk of government failure in standard setting the Blind Giant’s Quandary. Read More

#standards

Restoring Vision With Bionic Eyes: No Longer Science Fiction

Bionic vision might sound like science fiction, but Dr. Michael Beyeler is working on just that.

Originally from Switzerland, Dr. Beyeler is wrapping up his postdoctoral fellowship at the University of Washington before moving to the University of California Santa Barbara this fall to head up the newly formed Bionic Vision Lab in the Departments of Computer Science and Psychological & Brain Sciences.

We spoke with him about his “deep fascination with the brain” and how he hopes his work will eventually be able to restore vision to the blind. Read More

#human, #vision

Exposing DeepFake Videos By Detecting Face Warping Artifacts

In this work, we describe a new deep learning based method that can effectively distinguish AI-generated fake videos (referred to as DeepFake videos hereafter) from real videos. Our method is based on the observation that current DeepFake algorithms can only generate images of limited resolution, which must then be warped to match the original faces in the source video. Such transforms leave distinctive artifacts in the resulting DeepFake videos, and we show that they can be effectively captured by convolutional neural networks (CNNs). Compared to previous methods, which use large numbers of real and DeepFake-generated images to train a CNN classifier, our method does not need DeepFake-generated images as negative training examples, since we target the artifacts of affine face warping as the distinctive feature separating real and fake images. The advantages of our method are two-fold: (1) such artifacts can be simulated directly with simple image processing operations on an image to turn it into a negative example; since training a DeepFake model to generate negative examples is time-consuming and resource-demanding, our method saves considerable time and resources in training data collection; (2) since such artifacts generally exist in DeepFake videos from different sources, our method is more robust than others. Our method is evaluated on two DeepFake video datasets to demonstrate its effectiveness in practice. Read More
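The key idea above — that negative training examples can be simulated with simple image processing rather than by running a DeepFake generator — can be sketched as follows. This is a minimal illustration, not the authors' code: it mimics the resolution mismatch of a warped face by downsampling a region and upsampling it back with nearest-neighbor interpolation, using only NumPy (the function name and the choice of nearest-neighbor resizing are assumptions for the sketch).

```python
import numpy as np

def simulate_warping_artifact(image: np.ndarray, factor: int = 4) -> np.ndarray:
    """Simulate the low-resolution warping artifact the paper targets.

    Downsample the image by `factor`, then blow it back up to the
    original size with nearest-neighbor repetition. The blocky result
    resembles a low-resolution generated face warped onto a full-size
    frame, so it can serve as a negative training example without ever
    running a DeepFake model.
    """
    h, w = image.shape[:2]
    small = image[::factor, ::factor]  # naive downsample
    # Nearest-neighbor upsample: repeat each low-res pixel factor x factor times
    restored = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return restored[:h, :w]  # crop back to the original size

# Example: a synthetic 64x64 RGB "face crop"
img = (np.arange(64 * 64 * 3).reshape(64, 64, 3) % 256).astype(np.uint8)
negative_example = simulate_warping_artifact(img, factor=4)
```

In a real pipeline this degraded region would be pasted back into the original frame (optionally with Gaussian blurring at the boundary) before being fed to the CNN as a "fake" sample.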

#fake, #neural-networks

Detecting deepfakes by looking closely reveals a way to protect against them

Deepfake videos are hard for untrained eyes to detect because they can be quite realistic. Whether used as personal weapons of revenge, to manipulate financial markets or to destabilize international relations, videos depicting people doing and saying things they never did or said are a fundamental threat to the longstanding idea that “seeing is believing.” Not anymore.

Most deepfakes are made by showing a computer algorithm many images of a person, and then having it use what it saw to generate new face images. At the same time, their voice is synthesized, so it both looks and sounds like the person has said something new.

Now, our research can identify the manipulation of a video by looking closely at the pixels of specific frames. Taking one step further, we also developed an active measure to protect individuals from becoming victims of deepfakes. Read More

#fake

J Robot: Could Artificial Intelligence Actually Replace Reporters?

In the film I, Robot, loosely based upon stories from Isaac Asimov, Will Smith confronts a world where robots replace the functions of many humans. Will it happen for publications too, as “J Robots” (journalism robots) replace reporters? The newsroom will certainly never be the same.

If you think about it, the world of publishing has always seen machines take jobs away from humans, ever since the printing press churned out the Gutenberg Bible, eliminating one of the functions of monks who had painstakingly crafted well-scripted copies of books for thousands of years. Electric typewriters made their earlier counterparts obsolete, only to be ousted by personal computers. And how much has been written about the effect of the internet on newspapers and magazines, or digital journalism taking away ratings from radio and television? Read More

#augmented-intelligence, #nlp

Machine learning training puts Google and Nvidia on top

Artificial intelligence (AI) has advanced to the point where leading research universities and dozens of technology companies, including Google and Nvidia, are taking part in head-to-head benchmark comparisons of their chips.

Results of the latest round of benchmarks released this week showed that both Nvidia and Google have demonstrated they can reduce from days to hours the compute time necessary to train deep neural networks used in some common AI applications.

“The new results are truly impressive,” Karl Freund, senior analyst for machine learning at Moor Insights & Strategy, wrote in a commentary posted on EE Times. Of the six benchmarks, Nvidia and Google each racked up three top spots. Nvidia reduced its run-time by up to 80% using the V100 TensorCore accelerator in the DGX2h building block. Read More

#big7, #nvidia

The Shining starring Jim Carrey — It’s a DeepFake

Read More

#fake, #videos

Augmenting Human Intelligence

Context is critical. As what was once mere data evolves into actionable intelligence, the context that binds that data becomes ever more essential.

Consider the word “java.” With no context around those four letters, you might not understand the reference or make any sort of connection. But if you add just one word to “java,” such as “development,” “island,” or “coffee,” the reference changes completely—and that’s with just a single word of context.

This is the type of active context and connection that the Brainspace engine provides. Read More

#augmented-intelligence, #human, #videos

Augmented Intelligence: A Collaboration of Humans and Machines

Read More

#augmented-intelligence, #human, #ted-talks, #videos

Catalytic: ‘RPA is the gateway drug for AI’

The immediate benefit of RPA is that it can eliminate a lot of repetitive manual labor and free up humans for what they are better at. But there’s also a side effect. RPA helps enterprises create a standardized framework for capturing data about how they execute processes, as well as data about how processes can get delayed or stalled.

“If you set up RPA the right way by instrumenting the process, it’s possible to gather data to use as the training set for machine learning,” said Ted Shelton, Chief Revenue Officer at Catalytic, in an interview at Transform 2019. “RPA is the gateway drug for AI.” Read More

#microservices, #robotics