A growing number of tools now let you stop facial recognition systems from training on your personal photos
Uploading personal photos to the internet can feel like letting go. Who else will have access to them, what will they do with them—and which machine-learning algorithms will they help train?
The company Clearview AI has already supplied US law enforcement agencies with a facial recognition tool trained on photos of millions of people scraped from the public web. But that was likely just the start. Anyone with basic coding skills can now develop facial recognition software, meaning there is more potential than ever to abuse the tech in everything from sexual harassment and racial discrimination to political oppression and religious persecution.
A number of AI researchers are pushing back and developing ways to make sure AIs can’t learn from personal data. Two of the latest are being presented this week at ICLR, a leading AI conference. Read More
Tag Archives: Adversarial
Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses
The rapid development of artificial intelligence, especially deep learning technology, has advanced autonomous driving systems (ADSs) by providing precise control decisions for almost any driving event, spanning from anti-fatigue safe driving to intelligent route planning. However, ADSs are still plagued by increasing threats from different attacks, which can be categorized into physical attacks, cyber attacks, and learning-based adversarial attacks. Inevitably, the safety and security of deep learning-based autonomous driving are severely challenged by these attacks, and the countermeasures should be analyzed and studied comprehensively to mitigate all potential risks. This survey provides a thorough analysis of different attacks that may jeopardize ADSs, as well as the corresponding state-of-the-art defense mechanisms. The analysis is unrolled by taking an in-depth overview of each step in the ADS workflow, covering adversarial attacks on various deep learning models and attacks in both the physical and cyber contexts. Furthermore, some promising research directions are suggested to improve deep learning-based autonomous driving safety, including model robustness training, model testing and verification, and anomaly detection based on cloud/edge servers. Read More
#adversarial, #cyber
Practical Privacy with Synthetic Data
In this post, we will implement a practical attack on synthetic data models, described in The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks by Nicholas Carlini et al. We will use this attack to see how well synthetic data models with various neural network and differential privacy parameter settings actually protect sensitive data and secrets in datasets. And there are some pretty surprising results. Read More
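As a rough illustration of the Secret Sharer idea, the sketch below plants a canary secret among many random candidates and measures its "exposure" from the model's likelihoods. It is a minimal sketch only: the `model_log_likelihood` function and the PIN-style canary are assumptions for the example, not part of the original post.

```python
# Minimal sketch of a Secret Sharer-style exposure test, assuming a trained
# generative model exposed through a hypothetical model_log_likelihood(text) function.
import math
import random

def exposure(model_log_likelihood, canary, candidate_pool):
    """Rank the planted canary against random candidate secrets.

    exposure = log2(|candidates|) - log2(rank of the canary);
    a high exposure means the model assigns the canary an unusually
    high likelihood, i.e. it has memorized it.
    """
    scores = {c: model_log_likelihood(c) for c in candidate_pool}
    ranked = sorted(candidate_pool, key=lambda c: scores[c], reverse=True)
    rank = ranked.index(canary) + 1
    return math.log2(len(candidate_pool)) - math.log2(rank)

# Example: candidate secrets are 6-digit "PINs"; the canary would have been
# planted in the training data before fitting the synthetic data model.
canary = "my PIN is 281749"
candidates = [canary] + [f"my PIN is {random.randint(0, 999999):06d}" for _ in range(9999)]
# print(exposure(model.log_likelihood, canary, candidates))
```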
How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Predictions of certifiably robust classifiers remain constant in a neighborhood of a point, making them resilient to test-time attacks with a guarantee. In this work, we present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality in achieving high certified adversarial robustness. Specifically, we propose a novel bilevel optimization based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers. Unlike other poisoning attacks that reduce the accuracy of the poisoned models on a small set of target points, our attack reduces the average certified radius (ACR) of an entire target class in the dataset. Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation [8], MACER [36], and SmoothAdv [29] that achieve high certified adversarial robustness. To make the attack harder to detect, we use clean-label poisoning points with imperceptible distortions. The effectiveness of the proposed method is evaluated by poisoning the MNIST and CIFAR-10 datasets, training deep neural networks using the previously mentioned training methods, and certifying the robustness with randomized smoothing. The ACR of the target class, for models trained on the generated poison data, can be reduced by more than 30%. Moreover, the poisoned data is transferable to models trained with different training methods and models with different architectures. Read More
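For context, the certified radius that this attack shrinks comes from randomized smoothing (Cohen et al., 2019): add Gaussian noise around an input, take a vote of the base classifier, and certify a radius of sigma times the inverse normal CDF of the top-class probability. The sketch below is a simplified Monte Carlo estimate of that quantity; it skips the statistical lower bound and abstention used in practice, and `base_classifier` is an assumed stand-in for any trained classifier.

```python
# Rough sketch of the certified L2 radius under randomized smoothing,
# the quantity (averaged per class as the ACR) that the poisoning attack degrades.
import numpy as np
from scipy.stats import norm

def certified_radius(base_classifier, x, sigma=0.25, n_samples=1000):
    """Monte Carlo estimate of the smoothed classifier's certified radius at x."""
    noisy = x[None, ...] + sigma * np.random.randn(n_samples, *x.shape)
    votes = np.bincount([base_classifier(xi) for xi in noisy])
    p_a = votes.max() / n_samples        # empirical top-class probability
    p_a = min(p_a, 1 - 1e-6)             # keep the inverse CDF finite
    return sigma * norm.ppf(p_a)         # R = sigma * Phi^-1(p_A); <= 0 means no certificate
```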
Adversarial training reduces safety of neural networks in robots: Research
There’s a growing interest in employing autonomous mobile robots in open work environments such as warehouses, especially with the constraints posed by the global pandemic. And thanks to advances in deep learning algorithms and sensor technology, industrial robots are becoming more versatile and less costly.
But safety and security remain two major concerns in robotics.
… But adversarial training can have a significantly negative impact on the safety of robots, researchers at IST Austria, MIT, and TU Wien argue in a paper titled “Adversarial Training is Not Ready for Robot Learning.” Their paper, which has been accepted at the International Conference on Robotics and Automation (ICRA 2021), shows that the field needs new ways to improve adversarial robustness in deep neural networks used in robotics without reducing their accuracy and safety. Read More
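For readers unfamiliar with the term, adversarial training simply means training the network on adversarially perturbed inputs rather than (or alongside) clean ones. The sketch below shows a single-step (FGSM) variant in PyTorch, assuming a generic classifier `model`; the paper's point is that the robustness this buys can come at the cost of accuracy and, in robot learning, safety.

```python
# Minimal sketch of adversarial training, assuming a generic PyTorch classifier
# `model`, an optimizer, and (inputs, labels) mini-batches.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Craft a one-step (FGSM) adversarial example inside an eps-ball around x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """Train on adversarial examples instead of clean inputs."""
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```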
EU report warns that AI makes autonomous vehicles ‘highly vulnerable’ to attack
The dream of autonomous vehicles is that they can avoid human error and save lives, but a new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include sensor attacks with beams of light, overwhelming object detection systems, back-end malicious activity, and adversarial machine learning attacks presented in training data or the physical world.
“The attack might be used to make the AI ‘blind’ for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks,” the report reads. “The absence of sufficient security knowledge and expertise among developers and system designers on AI cybersecurity is a major barrier that hampers the integration of security in the automotive sector.” Read More
Why Some Models Leak Data
Machine learning models use large amounts of data, some of which can be sensitive. If they’re not trained correctly, sometimes that data is inadvertently revealed.
… Models of real world data are often quite complex—this can improve accuracy, but makes them more susceptible to unexpectedly leaking information. Medical models have inadvertently revealed patients’ genetic markers. Language models have memorized credit card numbers. Faces can even be reconstructed from image models.
… Training models with differential privacy stops the training data from leaking by limiting how much the model can learn from any one data point. Differentially private models are still at the cutting edge of research, but they’re being packaged into machine learning frameworks, making them much easier to use. When it isn’t possible to train differentially private models, there are also tools that can measure how much data the model is memorizing. Read More
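The core mechanism behind differentially private training (DP-SGD) is to clip each example's gradient and add calibrated noise, so no single data point can move the model very far. The toy sketch below shows that idea for plain logistic regression; it is illustrative only, and real deployments rely on frameworks such as Opacus or TensorFlow Privacy.

```python
# Toy sketch of a DP-SGD update: clip each per-example gradient, add Gaussian
# noise, then average. Shown for logistic regression with NumPy.
import numpy as np

def dp_sgd_step(w, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    per_example_grads = []
    for x, y in zip(X_batch, y_batch):
        p = 1.0 / (1.0 + np.exp(-x @ w))                     # sigmoid prediction
        g = (p - y) * x                                      # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)      # clip to clip_norm
        per_example_grads.append(g)
    noise = np.random.normal(0.0, noise_mult * clip_norm, size=w.shape)
    g_avg = (np.sum(per_example_grads, axis=0) + noise) / len(X_batch)
    return w - lr * g_avg
```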
Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions
Understanding the actions of both humans and artificial intelligence (AI) agents is important before modern AI systems can be fully integrated into our daily life. In this paper, we show that, despite their current huge success, deep learning-based AI systems can be easily fooled by subtle adversarial noise into misinterpreting the intention of an action in interaction scenarios. Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions, and demonstrate how DNN-based interaction models can be tricked into predicting the participants’ reactions in unexpected ways. From a broader perspective, the scope of our proposed attack method is not confined to problems related to skeleton data but can be extended to any problem involving sequential regression. Our study highlights potential risks in the interaction loop between AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications. Read More
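At a high level, attacks of this kind perturb the input sequence with small gradient-based steps so the regression model drifts toward an attacker-chosen output. The sketch below is a generic version of that idea, not the paper's exact method; the PyTorch sequence regressor `model` and the target trajectory are assumptions for illustration.

```python
# Hedged sketch: nudge a sequential input (e.g. a skeleton pose sequence) with
# small, bounded perturbations so an assumed regressor predicts a chosen reaction.
import torch
import torch.nn.functional as F

def attack_sequence(model, seq, target, steps=40, alpha=1e-3, eps=0.01):
    """Push model(seq) toward an attacker-chosen target, keeping the
    perturbation within an eps-ball so it stays subtle."""
    delta = torch.zeros_like(seq, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(model(seq + delta), target)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # step toward the target output
            delta.clamp_(-eps, eps)              # keep the change imperceptible
        delta.grad.zero_()
    return (seq + delta).detach()
```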
#adversarial
Adversarial Threats to DeepFake Detection: A Practical Perspective
Facially manipulated images and videos, or DeepFakes, can be used maliciously to fuel misinformation or defame individuals. Therefore, detecting DeepFakes is crucial to increase the credibility of social media platforms and other media-sharing websites. State-of-the-art DeepFake detection techniques rely on neural network-based classification models, which are known to be vulnerable to adversarial examples. In this work, we study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint. We perform adversarial attacks on DeepFake detectors in a black-box setting where the adversary does not have complete knowledge of the classification models. We study the extent to which adversarial perturbations transfer across different models and propose techniques to improve the transferability of adversarial examples. We also create more accessible attacks using Universal Adversarial Perturbations, which pose a very feasible attack scenario since they can be easily shared amongst attackers. We perform our evaluations on the winning entries of the DeepFake Detection Challenge (DFDC) and demonstrate that they can be easily bypassed in a practical attack scenario by designing transferable and accessible adversarial attacks. Read More
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples
Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before. Manipulated videos can fuel disinformation and reduce trust in media. Therefore, detection of fake videos has garnered immense interest in academia and industry. Recently developed Deepfake detection methods rely on Deep Neural Networks (DNNs) to distinguish AI-generated fake videos from real videos. In this work, we demonstrate that it is possible to bypass such detectors by adversarially modifying fake videos synthesized using existing Deepfake generation methods. We further demonstrate that our adversarial perturbations are robust to image and video compression codecs, making them a real-world threat. We present pipelines in both white-box and black-box attack scenarios that can fool DNN-based Deepfake detectors into classifying fake videos as real. Read More
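As a minimal white-box illustration of this kind of evasion, the sketch below applies a one-step FGSM perturbation that pushes an assumed PyTorch detector (logits over [real, fake]) toward the "real" class for a single frame. The paper's actual pipelines go further, making the perturbation robust to compression and extending it across whole videos and black-box settings.

```python
# Hedged white-box sketch: perturb one fake-video frame so an assumed
# DNN-based detector labels it "real".
import torch
import torch.nn.functional as F

REAL, FAKE = 0, 1

def evade_detector(detector, frame, eps=4 / 255):
    """One-step FGSM perturbation toward the 'real' class for a (C, H, W) frame."""
    x = frame.clone().detach().requires_grad_(True)
    logits = detector(x.unsqueeze(0))                       # add batch dimension
    loss = F.cross_entropy(logits, torch.tensor([REAL]))    # want 'real' to win
    loss.backward()
    x_adv = (x - eps * x.grad.sign()).clamp(0, 1)           # descend toward 'real'
    return x_adv.detach()
```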