In a battle of AI versus AI, researchers are preparing for the coming wave of deepfake propaganda

… Deepfake detection as a field of research began a little over three years ago. Early work focused on detecting visible problems in the videos, such as deepfakes that didn’t blink. With time, however, the fakes have become better at mimicking real videos and harder to spot, for both people and detection tools.

There are two major categories of deepfake detection research. The first looks at the behavior of the people in the videos. … Other researchers, including our team, have focused on differences that all deepfakes share compared with real videos. Read More
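The article mentions that early detectors flagged deepfakes that didn’t blink. One common way to quantify blinking in real-vs-fake analysis is the eye aspect ratio (EAR) over six eye landmarks; the sketch below illustrates that idea only, and is not the detection method used by the researchers quoted here. The landmark ordering and the 0.2 threshold are conventional assumptions.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six eye landmarks.

    `eye` is a (6, 2) array of (x, y) points, ordered as in the common
    68-point facial-landmark scheme: corners at indices 0 and 3, upper
    lid at 1 and 2, lower lid at 5 and 4. EAR drops sharply toward 0
    when the eye closes, so a long video in which EAR never dips below
    a threshold suggests the subject never blinks.
    """
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def blink_count(ear_series, threshold=0.2):
    """Count blinks as downward crossings of the EAR threshold."""
    below = [e < threshold for e in ear_series]
    return sum(1 for prev, cur in zip(below, below[1:]) if cur and not prev)
```

A per-frame EAR series from any facial-landmark tracker could be fed to `blink_count`; a rate far below the human norm of roughly 15–20 blinks per minute would be suspicious.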

#fake

Lidar used to cost $75,000—here’s how Apple brought it to the iPhone

How Apple made affordable lidar with no moving parts for the iPhone.

At Tuesday’s unveiling of the iPhone 12, Apple touted the capabilities of its new lidar sensor. Apple says lidar will enhance the iPhone’s camera by allowing more rapid focus, especially in low-light situations. And it may enable the creation of a new generation of sophisticated augmented reality apps. Read More
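Apple has not published the sensor’s internals, but lidar in general ranges by time of flight: a light pulse travels to a surface and back, and half the round trip at the speed of light gives the distance. A minimal sketch of that arithmetic:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second, in vacuum

def tof_distance(round_trip_seconds):
    """One-way distance to a surface from a pulse's round-trip time.

    The pulse travels out and back, so the distance is half of the
    speed of light multiplied by the elapsed time. At typical indoor
    ranges the round trip is on the order of tens of nanoseconds,
    which is why lidar needs very fast timing electronics.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a 5 m scene depth (the reported class of range for the iPhone sensor) corresponds to a round trip of only about 33 nanoseconds.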

#big7, #image-recognition, #robotics

Phantom of the ADAS: Securing Advanced Driver-Assistance Systems from Split-Second Phantom Attacks

In this paper, we investigate “split-second phantom attacks,” a scientific gap that causes two commercial advanced driver-assistance systems (ADASs), Tesla Model X (HW 2.5 and HW 3) and Mobileye 630, to treat a depthless object that appears for a few milliseconds as a real obstacle/object. We discuss the challenge that split-second phantom attacks create for ADASs. We demonstrate how attackers can apply split-second phantom attacks remotely by embedding phantom road signs into an advertisement presented on a digital billboard, which causes Tesla’s autopilot to suddenly stop the car in the middle of a road and Mobileye 630 to issue false notifications. We also demonstrate how attackers can use a projector to cause Tesla’s autopilot to apply the brakes in response to a phantom of a pedestrian projected on the road, and Mobileye 630 to issue false notifications in response to a projected road sign. To counter this threat, we propose a countermeasure that can determine whether a detected object is a phantom or real using just the camera sensor. The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks. Read More
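The committee-of-experts structure can be sketched as follows. The four expert names come from the abstract (light, context, surface, depth); averaging their scores is an assumption for illustration only — the paper combines the outputs of four trained CNNs, which are not reproduced here.

```python
from statistics import mean

EXPERTS = ("light", "context", "surface", "depth")

def committee_verdict(scores, threshold=0.5):
    """Combine per-expert authenticity scores into a real/phantom verdict.

    `scores` maps each expert to a value in [0, 1], where higher means
    "more likely a real object". A simple mean over the four experts
    stands in for the paper's learned combiner; the point is that no
    single cue (e.g. depth alone) decides the outcome.
    """
    missing = [e for e in EXPERTS if e not in scores]
    if missing:
        raise ValueError(f"missing expert scores: {missing}")
    return "real" if mean(scores[e] for e in EXPERTS) >= threshold else "phantom"
```

A projected phantom pedestrian might fool a context expert yet score near zero on surface and depth, pulling the committee average below the threshold — which is the intuition behind combining several weak cues.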

#adversarial, #robotics

NLP with CNNs

A step by step explanation, with a Keras implementation of the architecture.

Convolutional neural networks (CNNs) are the most widely used deep learning architectures in image processing and image recognition. Given their supremacy in the field of vision, it’s only natural to try applying them to other fields of machine learning. In this article, I will explain the important terminology regarding CNNs from a natural language processing perspective; a short Keras implementation with code explanations will also be provided. Read More
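The article’s Keras code is behind the link, but the core operation of a text CNN can be sketched in plain NumPy: slide filters over the rows of a word-embedding matrix (each filter spanning the full embedding dimension) and max-pool each feature map over time. All dimensions below are made up for illustration.

```python
import numpy as np

def conv1d_max_pool(embeddings, filters):
    """1D convolution over a sentence, then max-over-time pooling.

    `embeddings` has shape (seq_len, embed_dim): one row per token.
    `filters` has shape (n_filters, window, embed_dim): each filter
    covers `window` consecutive tokens across the whole embedding,
    as is standard in text CNNs. Returns an (n_filters,) feature
    vector that a dense classifier head would consume.
    """
    seq_len, embed_dim = embeddings.shape
    n_filters, window, _ = filters.shape
    positions = seq_len - window + 1
    feature_maps = np.empty((n_filters, positions))
    for i in range(positions):
        patch = embeddings[i:i + window]  # (window, embed_dim)
        # Dot each filter with the patch over both axes -> one value per filter.
        feature_maps[:, i] = np.tensordot(filters, patch, axes=([1, 2], [0, 1]))
    return feature_maps.max(axis=1)  # max-over-time pooling
```

In Keras the same pipeline would typically be `Conv1D` followed by `GlobalMaxPooling1D`; max-over-time pooling is what makes the output length independent of sentence length.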

#neural-networks, #nlp