At first glance, Renée DiResta thought the LinkedIn message seemed normal enough.
The sender, Keenan Ramsey, mentioned that they both belonged to a LinkedIn group for entrepreneurs. She punctuated her greeting with a grinning emoji before pivoting to a pitch for software.
“Quick question — have you ever considered or looked into a unified approach to message, video, and phone on any device, anywhere?”
DiResta wasn’t interested and would have ignored the message entirely, but then she looked closer at Ramsey’s profile picture. Little things seemed off in what should have been a typical corporate headshot. Ramsey was wearing only one earring. Bits of her hair disappeared and then reappeared. Her eyes were aligned right in the middle of the image. Read More
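The tells DiResta spotted (mismatched earrings, glitchy hair, eyes sitting at a fixed spot in the frame) are typical of GAN-generated headshots, which inherit the rigid face alignment of their training data. Below is a minimal sketch of the eye-position check, assuming StyleGAN-style alignment; the OpenCV Haar detector and the band thresholds are illustrative choices, not a published detector.

```python
# Heuristic check for the "eyes pinned to a fixed position" artifact described above.
# Assumption (not from the article): GAN face generators trained on FFHQ-aligned crops
# place both eyes near the horizontal center, in a narrow vertical band. The thresholds
# below are illustrative guesses, not calibrated constants.
import cv2
import numpy as np

def eyes_suspiciously_centered(image_path, band=0.06):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape

    # Detect eyes with OpenCV's stock Haar cascade (rough, but fine for a sketch).
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # could not find two eyes; no verdict

    # Keep the two largest detections and compute the midpoint between their centers.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    centers = [(x + ew / 2, y + eh / 2) for x, y, ew, eh in eyes]
    mid_x = np.mean([c[0] for c in centers]) / w
    mid_y = np.mean([c[1] for c in centers]) / h

    # Flag faces whose inter-eye midpoint sits almost exactly on the horizontal center
    # and in the upper-middle band typical of GAN-aligned crops (assumed ~0.40-0.48).
    return abs(mid_x - 0.5) < band and 0.40 - band < mid_y < 0.48 + band

print(eyes_suspiciously_centered("profile_photo.jpg"))  # True = worth a closer look
```

A hit from a heuristic like this is only a prompt to look closer, as DiResta did, not proof that a photo is synthetic.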
Deepfake Zelenskyy surrender video is the ‘first intentionally used’ in Ukraine war
A manipulated video of Ukrainian President Volodymyr Zelenskyy calling on citizens to surrender to Russia has been shared online.
The false video appears to show Zelenskyy addressing the nation and encouraging citizens to “lay down arms”.
One version of the “deepfake” was viewed more than 120,000 times on Twitter. Read More
People Trust Deepfake Faces Generated by AI More Than Real Ones, Study Finds
The proliferation of deepfake technology is raising concerns that AI could start to warp our sense of shared reality. New research suggests AI-synthesized faces don’t simply dupe us into thinking they’re real people; we actually trust them more than our fellow humans.
In 2018, Nvidia wowed the world with an AI that could churn out ultra-realistic photos of people who don’t exist. Its researchers relied on a type of algorithm known as a generative adversarial network (GAN), which pits two neural networks against each other, one trying to spot fakes and the other trying to generate more convincing ones. Given enough time, GANs can generate remarkably good counterfeits.
Since then, capabilities have improved considerably, with some worrying implications: enabling scammers to trick people, making it possible to splice people into porn movies without their consent, and undermining trust in online media. While it’s possible to use AI itself to spot deepfakes, tech companies’ failures to effectively moderate much less complicated material suggest this won’t be a silver bullet. Read More
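To make the adversarial setup concrete, here is a minimal PyTorch sketch of the two-network game described above. The network sizes, image resolution, and hyperparameters are toy placeholders; production face generators such as Nvidia’s StyleGAN are far larger and more elaborate.

```python
# A minimal generator-vs-discriminator training step, as described in the excerpt above.
# All sizes and hyperparameters are illustrative, not those of any real face model.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(                 # maps random noise to a fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(             # scores images as real (1) or fake (0)
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):               # real_images: (batch, img_dim) scaled to [-1, 1]
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # 1) The discriminator tries to spot fakes: real -> 1, fake -> 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) The generator tries to fool the discriminator: fake -> 1.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Repeating this step over many batches is the "given enough time" the article refers to: each network’s improvement forces the other to improve, which is how the counterfeits become so convincing.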
AI-synthesized faces are indistinguishable from real faces and more trustworthy
Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable from, and more trustworthy than, real faces. Read More
Fake It Till You Make It
We demonstrate that it is possible to perform face-related computer vision in the wild using synthetic data alone.
The community has long enjoyed the benefits of synthesizing training data with graphics, but the domain gap between real and synthetic data has remained a problem, especially for human faces. Researchers have tried to bridge this gap with data mixing, domain adaptation, and domain-adversarial training, but we show that it is possible to synthesize data with minimal domain gap, so that models trained on synthetic data generalize to real in-the-wild datasets.
We describe how to combine a procedurally generated parametric 3D face model with a comprehensive library of hand-crafted assets to render training images with unprecedented realism and diversity. We train machine learning systems for face-related tasks such as landmark localization and face parsing, showing that synthetic data can both match real data in accuracy and open up new approaches where manual labelling would be impossible. Read More
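The recipe the abstract describes is "train on rendered faces, test on real photos". The sketch below shows that loop in PyTorch for landmark localization; the tiny CNN, tensor shapes, and data loaders are placeholders, not the paper’s actual pipeline, and the synthetic labels are exact only because the renderer knows where it placed every landmark.

```python
# Sketch of training a landmark regressor purely on synthetic renders (exact labels by
# construction) and evaluating it on real photographs. Architecture and loaders are
# illustrative placeholders, not the published method.
import torch
import torch.nn as nn

NUM_LANDMARKS = 68

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(32 * 4 * 4, NUM_LANDMARKS * 2),   # (x, y) per landmark
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                            # mean absolute landmark error

def train_on_synthetic(synthetic_loader, epochs=5):
    """synthetic_loader yields (rendered_image, exact_landmarks) batches."""
    for _ in range(epochs):
        for images, landmarks in synthetic_loader:
            pred = model(images).view(-1, NUM_LANDMARKS, 2)
            loss = loss_fn(pred, landmarks)
            optimizer.zero_grad(); loss.backward(); optimizer.step()

@torch.no_grad()
def evaluate_on_real(real_loader):
    """Measures how well the synthetic-only model generalizes to real photos."""
    errors = [loss_fn(model(x).view(-1, NUM_LANDMARKS, 2), y).item() for x, y in real_loader]
    return sum(errors) / len(errors)
```

The paper’s claim is that when the renders are realistic and diverse enough, the gap between the synthetic training error and the real-photo evaluation error largely disappears.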
Microsoft’s AI Understands Humans…But It Had Never Seen One!
This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces.
Recently, generative adversarial networks (GANs) have achieved stunning realism, fooling even human observers. Indeed, the popular tongue-in-cheek website http://thispersondoesnotexist.com taunts users with GAN-generated images that seem too real to believe. On the other hand, GANs do leak information about their training data, as evidenced by membership attacks recently demonstrated in the literature. In this work, we challenge the assumption that GAN faces really are novel creations, by constructing a successful membership attack of a new kind. Unlike previous works, our attack can accurately discern samples sharing the same identity as training samples without being the same samples. We demonstrate the effectiveness of our attack across several popular face datasets and GAN training procedures. Notably, we show that even in the presence of significant dataset diversity, an overrepresented person can pose a privacy concern. Read More
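A schematic of this kind of identity membership attack, not the authors’ exact method: embed both the GAN outputs and candidate training faces with any off-the-shelf face-recognition network, then flag generated faces whose embedding lies unusually close to some training identity. The `face_embedding` model, the similarity threshold, and the variable names are all assumptions for illustration.

```python
# Identity membership attack sketch: does a GAN output match the identity of a training face?
# Embeddings are assumed to come from any off-the-shelf face-recognition model; the
# threshold is illustrative and would need calibration on held-out identities.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identity_membership_scores(generated_embeddings, training_embeddings_by_identity):
    """For each generated face, return the best-matching training identity and its score."""
    results = []
    for g in generated_embeddings:
        best_id, best_sim = None, -1.0
        for identity, embeddings in training_embeddings_by_identity.items():
            sim = max(cosine_similarity(g, e) for e in embeddings)
            if sim > best_sim:
                best_id, best_sim = identity, sim
        results.append((best_id, best_sim))
    return results

# Usage sketch: generated faces scoring above a calibrated threshold suggest the GAN has
# reproduced a training identity rather than invented a novel person.
THRESHOLD = 0.7  # assumed value, not from the paper
# flags = [(i, s) for i, s in identity_membership_scores(gen_embs, train_embs) if s > THRESHOLD]
```

The paper’s point about over-represented people follows directly: the more images of one identity the GAN saw during training, the more likely its samples are to land near that identity in embedding space.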
Synthetic Media: How deepfakes could soon change our world
A way to spot computer-generated faces
A small team of researchers from the State University of New York at Albany, the State University of New York at Buffalo, and Keya Medical has found a common flaw in computer-generated faces by which they can be identified. The group has written a paper describing its findings and has uploaded it to the arXiv preprint server.
…The researchers note that in many cases, users can simply zoom in on the eyes of a person they suspect may not be real to spot the pupil irregularities. They also note that it would not be difficult to write software to spot such errors and for social media sites to use it to remove such content. Unfortunately, they also note that now that such irregularities have been identified, the people creating the fake pictures can simply add a feature to ensure the roundness of pupils. Read More
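The "write software to spot such errors" idea reduces to measuring how circular the pupil is. Here is a rough sketch of that check, assuming a cropped eye region as input; the dark-pixel threshold and the roundness cutoff are illustrative, and real detectors segment the pupil far more carefully than this.

```python
# Rough pupil-roundness check: segment the dark pupil in an eye crop, fit an ellipse,
# and compare its axes. Thresholds are illustrative, not from the paper.
import cv2

def pupil_roundness(eye_crop_bgr):
    """Returns minor/major axis ratio of the fitted pupil ellipse (1.0 = perfect circle)."""
    gray = cv2.cvtColor(eye_crop_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)

    # The pupil is the darkest region of the eye: threshold, then keep the largest blob.
    _, mask = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:                      # cv2.fitEllipse needs at least 5 points
        return None

    (_, _), (axis_a, axis_b), _ = cv2.fitEllipse(pupil)
    return min(axis_a, axis_b) / max(axis_a, axis_b)

# Usage sketch: ratios well below ~0.9 (an assumed cutoff) hint at the irregular,
# non-circular pupils the researchers found in GAN-generated faces.
```

As the researchers concede, this is a cat-and-mouse cue: once generators are tweaked to enforce round pupils, the signal disappears.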
Warner Bros. ‘Reminiscence’ promo uses deepfake tech to put you in the trailer
If you want to see yourself on screen with Hugh Jackman, this is your chance. The promo for Warner Bros.’ upcoming Reminiscence movie uses deepfake technology to turn a photo of your face — or anybody’s face, really — into a short video sequence with the star. According to Protocol, a media startup called D-ID created the promo for the film. D-ID reportedly started out wanting to develop technology that can protect consumers against facial recognition, but then it realized that its tech could also be used to optimize deepfakes.
For this particular project, the firm created a website for the experience, where you’ll be asked for your name and a photo. You can upload a photo of anybody you want, and the experience will then conjure up an animation of the face in it. The animation isn’t perfect by any means, and the face can look distorted at times, but it’s still not bad considering the technology created it from a single picture. Read More