In June, as the rhetorical confrontation between the United States and China escalated, social media accounts from the pro-Chinese political spam network Spamouflage Dragon began posting English-language videos attacking American policy and the administration of U.S. President Donald Trump.
The videos were clumsily made, marked by language errors and awkward automated voice-overs. Some of the accounts on YouTube and Twitter used AI-generated profile pictures, a technique that appears to be increasingly common in disinformation campaigns. The network did not appear to receive any engagement from authentic users across social media platforms, nor did it appear to seriously attempt to conceal its Chinese origin as it pivoted toward messaging related to U.S. politics.
FaceForensics++: Learning to Detect Manipulated Facial Images
The rapid progress in synthetic image generation and manipulation has reached a point where it raises significant concerns about its implications for society. At best, this leads to a loss of trust in digital content, but it could cause further harm by spreading false information or fake news. This paper examines the realism of state-of-the-art image manipulations, and how difficult they are to detect, either automatically or by humans.
To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. In particular, the benchmark is based on DeepFakes [1], Face2Face [59], FaceSwap [2], and NeuralTextures [57] as prominent representatives of facial manipulations at random compression levels and sizes. The benchmark is publicly available and contains a hidden test set as well as a database of over 1.8 million manipulated images. This dataset is over an order of magnitude larger than comparable, publicly available forgery datasets. Based on this data, we performed a thorough analysis of data-driven forgery detectors. We show that the use of additional domain-specific knowledge improves forgery detection to unprecedented accuracy, even in the presence of strong compression, and clearly outperforms human observers.
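To make the detection approach concrete, here is a minimal sketch of the kind of data-driven forgery detector the paper evaluates: a CNN fine-tuned to label face crops as real or manipulated. The dataset layout, backbone (torchvision ships no Xception, so ResNet-18 stands in), and training settings are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a data-driven forgery detector: fine-tune a CNN
# to classify face crops as "real" vs "manipulated". Illustrative only;
# the data path, backbone, and training loop are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((299, 299)),  # input size used by Xception-style nets
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

# Assumed layout: data/real/*.png and data/fake/*.png
train_set = datasets.ImageFolder("data", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real / fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```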
The hack that could make face recognition think someone else is you
Researchers have demonstrated that they can fool a modern face recognition system into seeing someone who isn’t there.
A team from the cybersecurity firm McAfee set up the attack against a facial recognition system similar to those currently used at airports for passport verification. By using machine learning, they created an image that looked like one person to the human eye, but was identified as somebody else by the face recognition algorithm—the equivalent of tricking the machine into allowing someone to board a flight despite being on a no-fly list.
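The attack boils down to an optimization problem: perturb the attacker's photo, within a bound small enough to stay invisible, until a face-embedding model maps it to the target's identity. Below is a generic sketch of that idea; the function, hyperparameters, and suggested embedding model are illustrative assumptions, not McAfee's actual method.

```python
# Sketch of a targeted attack on a face-embedding model: nudge the
# attacker's photo until its embedding matches the target identity,
# while an L-infinity bound keeps the change invisible to humans.
# Generic illustration only, not the McAfee team's technique.
import torch

def targeted_face_attack(embed, source_img, target_img,
                         eps=0.03, steps=200, lr=0.01):
    """embed: any network mapping a (1, 3, H, W) image to an embedding."""
    target_emb = embed(target_img).detach()
    delta = torch.zeros_like(source_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (source_img + delta).clamp(0, 1)
        # Drive the adversarial embedding toward the target identity.
        loss = 1 - torch.cosine_similarity(embed(adv), target_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Project the perturbation back into the imperceptibility budget.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (source_img + delta).detach().clamp(0, 1)

# Usage (assumed embedding model from the facenet-pytorch package):
# embed = facenet_pytorch.InceptionResnetV1(pretrained="vggface2").eval()
# adv = targeted_face_attack(embed, source_img, target_img)
```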
Hackers Broke Into Real News Sites to Plant Fake Stories
Over the past few years, online disinformation has taken evolutionary leaps forward, with the Internet Research Agency pumping out artificial outrage on social media and hackers leaking documents—both real and fabricated—to suit their narrative. More recently, Eastern Europe has faced a broad campaign that takes fake news ops to yet another level: hacking legitimate news sites to plant fake stories, then hurriedly amplifying them on social media before they’re taken down.
On Wednesday, security firm FireEye released a report on a disinformation-focused group it’s calling Ghostwriter. The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and US troops in Poland and the Baltics; they’ve posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content.
Deepfakes ranked as most serious AI crime threat
Fake audio or video content has been ranked by experts as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new UCL report.
The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years.
CAI Achieves Milestone: White Paper Sets the Standard for Content Attribution
Today marks a significant milestone for the Content Authenticity Initiative (“CAI”) as we publish our white paper, “Setting the Standard for Content Attribution”. It addresses the mounting challenges of inauthentic media and presents our proposal for an industry-standard content attribution solution that will enable creators to securely attach their identity and other information to their work before they share it with the world.
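The underlying pattern is simple to sketch: hash the asset, sign the hash together with creator metadata, and verify both later. The example below illustrates that generic pattern with Ed25519 signatures; it is not the CAI specification itself, and the file name and metadata fields are invented for the demo.

```python
# Sketch of cryptographic content attribution: hash the asset, sign the
# hash plus creator metadata, ship the signed claim with the file.
# Illustrative pattern only; not the CAI/C2PA specification.
import hashlib
import json
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_asset(path: str, creator: str, key: Ed25519PrivateKey) -> dict:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    claim = {"asset_sha256": digest, "creator": creator}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

def verify_asset(path: str, record: dict, public_key) -> bool:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != record["claim"]["asset_sha256"]:
        return False  # the file was altered after signing
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Usage with an invented file name and identity:
key = Ed25519PrivateKey.generate()
record = sign_asset("photo.jpg", "creator@example.com", key)
print(verify_asset("photo.jpg", record, key.public_key()))
```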
AI-Generated Text Is the Scariest Deepfake of All
When pundits and researchers tried to guess what sort of manipulation campaigns might threaten the 2018 and 2020 elections, misleading AI-generated videos often topped the list. Though the tech was still emerging, its potential for abuse was so alarming that tech companies and academic labs prioritized working on, and funding, methods of detection. Social platforms developed special policies for posts containing “synthetic and manipulated media,” in hopes of striking the right balance between preserving free expression and deterring viral lies. But now, with about three months to go until November 3, that wave of deepfaked moving images seems never to have broken. Instead, another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: deepfake text.
Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us?
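For a sense of how little code machine-generated text takes today, here is a minimal sketch using the freely downloadable GPT-2 (GPT-3 itself is accessible only through a private API); the prompt and sampling settings are arbitrary choices for illustration.

```python
# Generate machine-written text with an open model as a stand-in for
# GPT-3, which is API-only. Prompt and settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The city council voted last night to"
out = generator(prompt, max_length=60, do_sample=True,
                temperature=0.9, num_return_sequences=1)
print(out[0]["generated_text"])
```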
FoolChecker: A platform to check how robust an image is against adversarial attacks
Deep neural networks (DNNs) have so far proved to be highly promising for a wide range of applications, including image and audio classification. Nonetheless, their performance heavily relies on the amount of data used to train them, and large datasets are not always readily available.
When DNNs are not adequately trained, they are more prone to misclassifying data. This makes them vulnerable to a particular class of cyber-attacks known as adversarial attacks. In an adversarial attack, an attacker crafts subtly altered copies of real data (i.e., adversarial examples) designed to fool a DNN, tricking it into misclassifying inputs and thus impairing its function.
In recent years, computer scientists and developers have proposed a variety of tools that could protect deep neural architectures from these attacks, by detecting the differences between original and adversarial data. However, so far, none of these solutions has proved universally effective.
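The canonical example of the attack described above is the fast gradient sign method (FGSM): a one-step perturbation aligned with the loss gradient that can flip a classifier's prediction while leaving the image visually unchanged. A minimal sketch follows, using a stock pretrained model and a dummy input as stand-ins for a real pipeline.

```python
# Minimal FGSM sketch: a tiny, gradient-aligned perturbation that can
# flip a classifier's prediction. Model and input are stand-ins.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, true_label, eps=0.03):
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that most increases the loss.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()

# Usage with a dummy input (a real attack would load and normalize an
# actual image; on random noise the effect may vary):
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)
adv = fgsm(x, y)
print(model(x).argmax(dim=1).item(), model(adv).argmax(dim=1).item())
```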
This spooky deepfake AI mimics dozens of celebs and politicians
The voice sounds oddly familiar, like I’ve heard it a thousand times before — and I have. Indeed, it sounds just like Sir David Attenborough. But it’s not him. It’s not a person at all.
It’s simply a piece of AI software called Vocodes. The tool, which I can best describe as a deepfake generator, can mimic the voices of a slew of politicians and celebrities including Donald Trump, Barack Obama, Bryan Cranston, Danny DeVito, and a dozen more.
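Tools like this are built on neural text-to-speech. Vocodes' own models and API are not public, so the sketch below uses the open-source Coqui TTS library as a stand-in; the model name is one of Coqui's published pretrained voices, not a celebrity clone.

```python
# Programmatic speech synthesis with the open-source Coqui TTS library,
# as a stand-in for proprietary tools like Vocodes. Illustrative only.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")
tts.tts_to_file(
    text="This voice was generated entirely by software.",
    file_path="synthetic_voice.wav",
)
```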
From virtual Lolitas to extreme sex, deepfake porn is blurring the lines of consent and reality
Exploring the dark, liberating, and potentially catastrophic future of technology’s freakiest frontier.
…In this time of creeping incertitude and simmering distrust of news, the potential power of convincing, well-wrought, virtually undetectable deepfakes rightly raises a shuddering horror.