Deepfake “Amazon workers” are sowing confusion on Twitter. That’s not the problem.

The accounts are likely just parodies, not part of a sinister corporate strategy, but they illustrate the kind of thing that could happen someday.

The news: Ahead of a landmark vote that could lead to the formation of the first-ever labor union at a US-based Amazon warehouse, new Twitter accounts purporting to be Amazon employees started appearing. The profiles used deepfake photos as profile pictures and were tweeting some pretty laughable, over-the-top defenses of Amazon’s working practices. They didn’t seem real, but they still led to confusion among the public. Was Amazon really behind them? Was this some terrible new anti-union social media strategy? The answer is almost certainly not—but the use of deepfakes in this context points to a more concerning trend overall. Read More

#fake

Lost Tapes of the 27 Club

Using AI to create the album lost to music’s mental health crisis.

As long as there’s been popular music, musicians and crews have struggled with mental health at a rate far exceeding that of the general adult population. And this issue hasn’t just been ignored. It’s been romanticized by things like the 27 Club—a group of musicians whose lives were all lost at just 27 years old.

To show the world what’s been lost to this mental health crisis, we’ve used artificial intelligence to create the album the 27 Club never had the chance to. Through this album, we’re encouraging more music industry insiders to get the mental health support they need, so they can continue making the music we all love for years to come.

Because even AI will never replace the real thing. Read More

#fake

The hidden fingerprint inside your photos

They say a picture is worth a thousand words. Actually, there’s a great deal more hidden inside the modern digital image, says researcher Jerone Andrews.

… When you take a photo, your smartphone or digital camera stores “metadata” within the image file. This automatically and parasitically burrows itself into every photo you take. It is data about data, providing identifying information such as when and where an image was captured, and what type of camera was used.

…But metadata is not the only thing hidden in your photos. There is also a unique personal identifier linking every image you capture to the specific camera used. Read More
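
The metadata half of this is easy to verify yourself. Below is a minimal sketch, using Python’s Pillow library, that dumps the EXIF tags a camera embeds in an image file; the file name is a placeholder. (The sensor-level “fingerprint” the article goes on to describe is a statistical pattern in the pixels themselves and takes considerably more work to extract.)

```python
# A minimal sketch of reading the metadata a camera writes into an image.
# Requires Pillow (pip install Pillow); "photo.jpg" is a placeholder path.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> None:
    """Print the EXIF tags stored inside an image file."""
    with Image.open(path) as img:
        exif = img.getexif()  # dict-like mapping of numeric tag IDs to values
        if not exif:
            print("No EXIF metadata found.")
            return
        for tag_id, value in exif.items():
            # Translate numeric tag IDs (e.g. 0x0110) to names (e.g. "Model")
            name = TAGS.get(tag_id, hex(tag_id))
            print(f"{name}: {value}")

dump_exif("photo.jpg")
```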

#fake, #image-recognition

Could The Simpsons Replace Its Voice Actors With AI?

Deepfake technology can make convincing replicas from a limited amount of data, and the show has 30 years’ worth of audio to work from.

In May 2015, The Simpsons voice actor Harry Shearer—who plays a number of key characters including, quite incredibly, both Mr. Burns and Waylon Smithers—announced that he was leaving the show … Fox, the producer of The Simpsons, was looking to cut costs—and was threatening to cancel the series unless the voice actors took a 30 percent pay cut. … Shearer (who had been critical of the show’s declining quality) refused to sign. …But you’ll never stop The Simpsons. After a few months, Shearer relented and signed a new deal.

…But maybe the producers of the show don’t actually need voice actors anymore. In a recent episode, Edna Krabappel—Bart’s long-suffering teacher, whose character was retired from the show after the death of voice actor Marcia Wallace in 2013—was brought back for a final farewell using recordings that had been made for previous episodes.

Advances in computing power mean that you could extend that principle to any character. Deepfake technology can make convincing lookalikes from a limited amount of training data, and the producers of the show have 30 years’ worth of audio to work from. So could The Simpsons replace its voice cast with an AI? Read More

#fake

Am I a Real or Fake Celebrity?

Recently, significant advancements have been made in face recognition technologies using Deep Neural Networks. As a result, companies such as Microsoft, Amazon, and Naver offer highly accurate commercial face recognition web services for diverse applications to meet end-user needs. Naturally, however, such technologies are persistently threatened, as virtually any individual can quickly implement impersonation attacks. In particular, these attacks can be a significant threat to authentication and identification services, which rely heavily on the accuracy and robustness of their underlying face recognition technologies. Despite its gravity, the issue of deepfake abuse using commercial web APIs, and the robustness of those APIs, has not yet been thoroughly investigated. This work provides a measurement study on the robustness of black-box commercial face recognition APIs against Deepfake Impersonation (DI) attacks, using celebrity recognition APIs as an example case study. We use five deepfake datasets, two of which we created and plan to release. More specifically, we measure attack performance under two scenarios (targeted and non-targeted) and further analyze the differing system behaviors using fidelity, confidence, and similarity metrics. Accordingly, we demonstrate how vulnerable the face recognition technologies of popular companies are to DI attacks, achieving maximum success rates of 78.0% and 99.9% for targeted (i.e., precise match) and non-targeted (i.e., match with any celebrity) attacks, respectively. Moreover, we propose practical defense strategies to mitigate DI attacks, reducing the attack success rates to as low as 0% and 0.02% for targeted and non-targeted attacks, respectively. Read More
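
For a concrete sense of the two scenarios, here is a hedged sketch of how targeted versus non-targeted success could be scored. The `recognize` client is a hypothetical stand-in for a commercial black-box celebrity-recognition API, and the confidence threshold is an assumption; the paper itself queries real commercial services.

```python
# A minimal sketch of the paper's two success criteria, assuming a
# hypothetical black-box client `recognize(path) -> (name, confidence)`
# standing in for a commercial celebrity-recognition API.
from typing import Callable, List, Tuple

def measure_di_attack(
    fakes: List[Tuple[str, str]],          # (image_path, impersonated_celebrity)
    recognize: Callable[[str], Tuple[str, float]],
    min_confidence: float = 0.5,           # assumed acceptance threshold
) -> Tuple[float, float]:
    """Return (targeted_rate, nontargeted_rate) over a set of deepfakes."""
    targeted = nontargeted = 0
    for path, target in fakes:
        name, conf = recognize(path)       # one black-box query per image
        if conf < min_confidence:
            continue                       # API rejected the face outright
        nontargeted += 1                   # matched *some* celebrity
        if name == target:
            targeted += 1                  # matched the impersonated one
    n = len(fakes)
    return targeted / n, nontargeted / n
```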

#fake, #image-recognition

Tom Cruise deepfake creator says public shouldn’t be worried about ‘one-click fakes’

Weeks of work and a top impersonator were needed to make the viral clips

When a series of spookily convincing Tom Cruise deepfakes went viral on TikTok, some suggested it was a chilling sign of things to come — a harbinger of an era where AI will let anyone make fake videos of anyone else. The video’s creator, though, Belgian VFX specialist Chris Ume, says this is far from the case. Speaking to The Verge about his viral clips, Ume stresses the amount of time and effort that went into making each deepfake, as well as the importance of working with a top-flight Tom Cruise impersonator, Miles Fisher.

“You can’t do it by just pressing a button,” says Ume. “That’s important, that’s a message I want to tell people.” Each clip took weeks of work, he says, using the open-source DeepFaceLab algorithm as well as established video editing tools. “By combining traditional CGI and VFX with deepfakes, it makes it better. I make sure you don’t see any of the glitches.” Read More

#fake, #image-recognition

Most People Can’t Tell the Difference Between Art Made by Humans and by AI, a Rather Concerning New Study Says

“There is a battle rising between humans and machines.”

No, that’s not a voiceover from another Matrix or Terminator movie. That’s the first line of a new study on how humans perceive artworks made by computers versus those made by humans, and, according to the findings, published in the journal Empirical Studies in the Arts, things don’t look great for the humans.

When the researcher Harsha Gangadharbatla saw the headlines three years ago about a painting created via artificial intelligence by the collective Obvious selling for $432,500 at Christie’s, he didn’t just shake his head at the price. He wondered what this might teach us about how humans perceive art. Read More

#fake

The AI Research Paper Was Real. The ‘Coauthor’ Wasn’t

An IBM researcher found his name on two papers with which he had no connection. A different paper listed a fictitious author by the name of “Bill Franks.”

David Cox, the co-director of a prestigious artificial intelligence lab in Cambridge, Massachusetts, was scanning an online computer science bibliography in December when he noticed something odd—his name listed as an author alongside three researchers in China whom he didn’t know on two papers he didn’t recognize.

At first, he didn’t think much of it. The name Cox isn’t uncommon, so he figured there must be another David Cox doing AI research. “Then I opened up the PDF and saw my own picture looking back at me,” Cox says. “It was unbelievable.” Read More

#fake

Adversarial Threats to DeepFake Detection: A Practical Perspective

Facially manipulated images and videos, or DeepFakes, can be used maliciously to fuel misinformation or defame individuals. Therefore, detecting DeepFakes is crucial to increase the credibility of social media platforms and other media-sharing websites. State-of-the-art DeepFake detection techniques rely on neural-network-based classification models which are known to be vulnerable to adversarial examples. In this work, we study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint. We perform adversarial attacks on DeepFake detectors in a black-box setting where the adversary does not have complete knowledge of the classification models. We study the extent to which adversarial perturbations transfer across different models and propose techniques to improve the transferability of adversarial examples. We also create more accessible attacks using Universal Adversarial Perturbations, which pose a very feasible attack scenario since they can be easily shared amongst attackers. We perform our evaluations on the winning entries of the DeepFake Detection Challenge (DFDC) and demonstrate that they can be easily bypassed in a practical attack scenario by designing transferable and accessible adversarial attacks. Read More
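
To make the Universal Adversarial Perturbation idea concrete, here is a hedged PyTorch sketch: a single perturbation, optimized across many fake frames at once, that pushes a differentiable detector toward the “real” label. The detector interface, label convention, and perturbation budget are assumptions, not the authors’ attack or the DFDC winners’ code.

```python
# A hedged sketch of crafting one "universal" perturbation shared across
# many fake frames. `detector` is assumed to be any differentiable binary
# classifier returning a logit (convention assumed here: 1 = fake, 0 = real).
import torch

def universal_perturbation(detector, fake_frames, eps=8 / 255, steps=100, lr=1e-2):
    """Optimize a single perturbation that nudges every frame toward 'real'."""
    delta = torch.zeros_like(fake_frames[0], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    real = torch.zeros(1)  # target label: detector says "real"
    for _ in range(steps):
        for x in fake_frames:
            logit = detector((x + delta).clamp(0, 1).unsqueeze(0)).view(1)
            loss = torch.nn.functional.binary_cross_entropy_with_logits(logit, real)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep it imperceptible (L-infinity ball)
    return delta.detach()  # add to any fake frame to attempt evasion
```

Because the same perturbation works on every frame, it can be computed once and shared, which is exactly why the paper flags it as an especially accessible attack.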

#adversarial, #big7, #fake

Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples

Recent advances in video manipulation techniques have made the generation of fake videos more accessible than ever before. Manipulated videos can fuel disinformation and reduce trust in media. Therefore, detection of fake videos has garnered immense interest in academia and industry. Recently developed Deepfake detection methods rely on Deep Neural Networks (DNNs) to distinguish AI-generated fake videos from real videos. In this work, we demonstrate that it is possible to bypass such detectors by adversarially modifying fake videos synthesized using existing Deepfake generation methods. We further demonstrate that our adversarial perturbations are robust to image and video compression codecs, making them a real-world threat. We present pipelines in both white-box and black-box attack scenarios that can fool DNN-based Deepfake detectors into classifying fake videos as real. Read More
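
As a rough illustration of the white-box scenario, here is a minimal projected-gradient-descent sketch that modifies a single fake frame until a differentiable detector scores it as real. The detector and its label convention are assumptions; a real-world attack, as the paper stresses, would also need to survive compression codecs.

```python
# A minimal white-box sketch in the spirit of the paper: iteratively nudge
# one fake frame so a differentiable detector scores it as "real".
# `detector` and the frame tensor are assumptions, not the authors' pipeline.
import torch
import torch.nn.functional as F

def attack_frame(detector, frame, eps=4 / 255, alpha=1 / 255, iters=40):
    """Projected gradient descent toward the 'real' class (assumed label 0)."""
    x_adv = frame.clone()
    target = torch.zeros(1)  # assumed convention: 1 = fake, 0 = real
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logit = detector(x_adv.unsqueeze(0)).view(1)
        loss = F.binary_cross_entropy_with_logits(logit, target)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()               # step toward "real"
            x_adv = frame + (x_adv - frame).clamp(-eps, eps)  # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                         # keep a valid image
    return x_adv.detach()
```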

#adversarial, #fake