With the rise of adversarial attacks, researchers have attempted to fool trained object detectors in 2D scenes. Among these, an intriguing new form of attack with potential real-world applications is to append adversarial patches (e.g., logos) to images. Nevertheless, much less is known about adversarial attacks from 3D rendering views, which is essential for an attack to remain persistently strong in the physical world. This paper presents a new 3D adversarial logo attack: we construct an arbitrarily shaped logo from a 2D texture image and map it into a 3D adversarial logo via a texture mapping called logo transformation. The resulting 3D adversarial logo is then treated as an adversarial texture, enabling easy manipulation of its shape and position. This greatly extends the versatility of adversarial training for computer-graphics-synthesized imagery. Unlike the traditional adversarial patch, this new form of attack is mapped into the 3D object world and back-propagates to the 2D image domain through differentiable rendering. Moreover, unlike existing adversarial patches, our new 3D adversarial logo is shown to fool state-of-the-art deep object detectors robustly under model rotations, taking one step further toward realistic attacks in the physical world. Read More
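The optimization loop the abstract describes can be sketched in a few lines of PyTorch. In the minimal sketch below, `render` and `detector_score` are toy stand-ins for the paper's differentiable renderer and target detector; only the gradient flow from the detector's output back to the 2D logo texture is faithful to the described pipeline.

```python
# Toy sketch: the logo texture is the only trainable parameter, and gradients
# flow from the detector's score back through a differentiable renderer.
# Both render() and detector_score() are simplified stand-ins, not the
# paper's actual components.
import torch

texture = torch.rand(3, 64, 64, requires_grad=True)  # 2D logo texture (RGB)
optimizer = torch.optim.Adam([texture], lr=0.01)

def render(texture, angle):
    """Stand-in for a differentiable renderer: a rotation-dependent,
    differentiable warp so that gradients can reach the texture."""
    shift = int(angle) % texture.shape[-1]
    return torch.roll(texture, shifts=shift, dims=-1)

def detector_score(image):
    """Stand-in for a detector's objectness/confidence score."""
    return image.mean()

for step in range(100):
    angle = torch.randint(0, 360, (1,)).item()  # random 3D viewpoint
    image = render(texture, angle)              # 3D logo -> 2D image
    loss = detector_score(image)                # minimize detection confidence
    optimizer.zero_grad()
    loss.backward()                             # back-prop through the renderer
    optimizer.step()
    with torch.no_grad():
        texture.clamp_(0, 1)                    # keep texture a valid image
```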
Monthly Archives: July 2020
FoolChecker: A platform to check how robust an image is against adversarial attacks
Deep neural networks (DNNs) have so far proved to be highly promising for a wide range of applications, including image and audio classification. Nonetheless, their performance heavily relies on the amount of data used to train them, and large datasets are not always readily available.
When DNNs are not adequately trained, they are more prone to misclassifying data. This makes them vulnerable to a particular class of cyber-attacks known as adversarial attacks. In an adversarial attack, an attacker creates subtly perturbed copies of real data (i.e., adversarial examples) designed to fool a DNN into misclassifying them, thus impairing its function.
In recent years, computer scientists and developers have proposed a variety of tools that could protect deep neural architectures from these attacks, by detecting the differences between original and adversarial data. However, so far, none of these solutions has proved universally effective. Read More
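As a concrete illustration of how adversarial data is crafted, here is the classic Fast Gradient Sign Method (FGSM). This is the textbook attack, not FoolChecker's own evaluation algorithm; `model` is assumed to be any differentiable classifier whose inputs are normalized to [0, 1].

```python
# Fast Gradient Sign Method (FGSM): perturb the input in the direction that
# most increases the classifier's loss, bounded by epsilon.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Return a perturbed copy of x that the model is more likely to misclassify."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One signed-gradient step up the loss surface, then clip to a valid image.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

A tiny perturbation (epsilon around 0.03 for images in [0, 1]) is often invisible to a human yet enough to flip the model's prediction, which is exactly the weakness platforms like FoolChecker set out to quantify.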
IBM completes successful field trials on Fully Homomorphic Encryption
FHE allows computation of still-encrypted data, without sharing the secrets.
Yesterday, Ars spoke with IBM Senior Research Scientist Flavio Bergamaschi about the company’s recent successful field trials of Fully Homomorphic Encryption. We suspect many of you will have the same questions that we did—beginning with “what is Fully Homomorphic Encryption?”
FHE is a type of encryption that allows mathematical operations to be performed directly on encrypted data. Upon decryption, the results are the same as if the operations had been performed on the unencrypted data.
…You don’t ever have to share a key with the third party doing the computation; the data remains encrypted with a key the third party never received. Read More
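Real FHE schemes require heavyweight libraries (e.g., IBM's HElib), but the core homomorphic property is easy to demonstrate with the much simpler Paillier cryptosystem, which is additively (not fully) homomorphic. The toy Python sketch below uses insecure key sizes and is only meant to show the principle: arithmetic on ciphertexts decrypts to the right answer, and the computing party never sees the key or the plaintexts.

```python
# Toy demonstration of homomorphic encryption using the Paillier scheme.
# Paillier is only *additively* homomorphic (FHE also supports multiplication),
# and these key sizes are hopelessly insecure; this just illustrates that
# math on ciphertexts yields the correct result after decryption.
from math import gcd
import random

p, q = 293, 433                                # toy primes; real keys are 2048+ bits
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # inverse of L(g^lam mod n^2) mod n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2               # multiplying ciphertexts adds plaintexts
assert decrypt(c_sum) == 42          # third party never saw 20, 22, or the key
```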
Introducing the Model Card Toolkit for Easier Model Transparency Reporting
Machine learning (ML) model transparency is important across a wide variety of domains that impact people's lives, from healthcare to personal finance to employment. The information needed by downstream users will vary, as will the details that developers need in order to decide whether or not a model is appropriate for their use case. This desire for transparency led us to develop Model Cards, which provide a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation, and give a detailed overview of a model's suggested uses and limitations that can benefit developers, regulators, and downstream users alike.
Over the past year, we’ve launched Model Cards publicly and worked to create Model Cards for open-source models released by teams across Google. Read More
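A minimal usage sketch, based on the API shown in the launch announcement (`pip install model-card-toolkit`); field and method names may differ in later releases.

```python
# Generate a Model Card: scaffold the card's fields, populate them by hand,
# persist the data as JSON, and render an HTML report.
import model_card_toolkit

mct = model_card_toolkit.ModelCardToolkit('model_card_assets')

model_card = mct.scaffold_assets()
model_card.model_details.name = 'Example Classifier'       # illustrative values
model_card.model_details.overview = 'Classifies images into 10 categories.'

mct.update_model_card_json(model_card)   # write the card data to JSON
html = mct.export_format()               # return the card as an HTML page
```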
Google’s TF-Coder tool automates machine learning model design
Researchers at Google Brain, one of Google’s AI research divisions, developed an automated tool for programming in machine learning frameworks like TensorFlow. They say it achieves better-than-human performance on some challenging development tasks, taking seconds to solve problems that take human programmers minutes to hours. Read More
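To give a feel for the task, here is the kind of input/output specification TF-Coder works from, together with a TensorFlow expression a synthesizer would have to discover by search. The example is illustrative, not taken from the tool's documentation; per the announcement, the tool itself is used through a Colab notebook.

```python
# The flavor of task TF-Coder automates: the user supplies example inputs and
# the desired output, and the tool searches for a TensorFlow expression that
# maps one to the other.
import tensorflow as tf

# Specification by example: turn class indices into one-hot rows scaled by
# per-example weights.
indices = tf.constant([0, 2, 1])
weights = tf.constant([1.0, 2.0, 3.0])
expected = tf.constant([[1.0, 0.0, 0.0],
                        [0.0, 0.0, 2.0],
                        [0.0, 3.0, 0.0]])

# The expression a synthesizer would need to find:
result = tf.one_hot(indices, depth=3) * tf.expand_dims(weights, 1)
assert bool(tf.reduce_all(tf.equal(result, expected)))
```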
Eye On A.I. — Episode 45 – Jack Shanahan
This week I speak to Lieutenant General Jack Shanahan, recently retired Director of the Pentagon's Joint Artificial Intelligence Center, or JAIC. He was instrumental in starting Project Maven to integrate state-of-the-art computer vision into drone technology. He then started the JAIC, the central hub for the military's AI efforts. Gen. Shanahan spoke about the challenges of nurturing innovation within a rigid and multilayered organization like the DOD, and about the threats the US faces ahead. Read More
This spooky deepfake AI mimics dozens of celebs and politicians
The voice sounds oddly familiar, like I’ve heard it a thousand times before — and I have. Indeed, it sounds just like Sir David Attenborough. But it’s not him. It’s not a person at all.
It's simply a piece of AI software called Vocodes. The tool, which I can best describe as a deepfake generator, can mimic the voices of a slew of politicians and celebrities, including Donald Trump, Barack Obama, Bryan Cranston, Danny DeVito, and a dozen more. Read More
China’s Quest for AI Dominance – And How It’s Going
Whether you get your news from Facebook or from the Wall Street Journal, you can't help but have heard that China is out to displace the US as the world leader in AI. Variously, you may have heard that it has already happened or soon inevitably will.
The twin questions of whether China will succeed (if ever) and when (is it inevitable?) are ones I get all the time. As a red-white-and-blue American, I hope not. As a world citizen of the tribe of data scientists, I wonder why we can't just all get along. And as those divided feelings should presage, the current state of this struggle is about both competition and cooperation, and also about unintended consequences. Read More
Competing in Artificial Intelligence Chips: China’s Challenge amid Technology War
This special report assesses the challenges that China faces in developing its artificial intelligence (AI) industry amid unprecedented US technology export restrictions. A central proposition is that China's achievements in AI lack a robust foundation in leading-edge AI chips, leaving the country vulnerable to externally imposed supply disruptions. Success in AI requires mastery of data, algorithms, and computing power, the last of which is determined by the performance of AI chips. Cost-effective, energy-efficient computing power is the indispensable third component of this magic AI triangle.
Drawing on field research conducted in 2019, this report contributes to the literature by addressing China’s arguably most immediate and difficult AI challenges. Read More