Researchers Want to Protect Your Selfies From Facial Recognition

‘Fawkes’ may be the most advanced system yet for fooling facial recognition tech like Clearview AI—until the algorithms catch up.

Researchers have created what may be the most advanced system yet for tricking top-of-the-line facial recognition algorithms, subtly modifying images to make faces and other objects unrecognizable to machines. Read More
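
For a rough sense of how this kind of "cloaking" works, here is a minimal sketch in PyTorch: nudge a photo's pixels, within a small perceptual budget, so that a face-embedding model maps it toward a decoy identity. The embed_model and target_embedding names are hypothetical stand-ins; this illustrates the general adversarial-perturbation technique, not the Fawkes implementation itself.

```python
import torch
import torch.nn.functional as F

def cloak(image, embed_model, target_embedding, budget=0.03, steps=50, lr=0.01):
    """Perturb `image` (a [C,H,W] float tensor in [0,1]) so its face embedding
    drifts toward a decoy identity while the pixels stay visually unchanged."""
    delta = torch.zeros_like(image, requires_grad=True)  # the "cloak"
    opt = torch.optim.Adam([delta], lr=lr)
    original = embed_model(image.unsqueeze(0)).detach()
    for _ in range(steps):
        opt.zero_grad()
        emb = embed_model((image + delta).clamp(0, 1).unsqueeze(0))
        # Pull the embedding toward the decoy, push it away from the true identity.
        loss = F.mse_loss(emb, target_embedding) - 0.1 * F.mse_loss(emb, original)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-budget, budget)  # keep the change imperceptible (L-infinity)
    return (image + delta).clamp(0, 1).detach()
```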

#fake, #surveillance

Deepfakes Are Becoming the Hot New Corporate Training Tool

This month, advertising giant WPP will send unusual corporate training videos to tens of thousands of employees worldwide. A presenter will speak in the recipient’s language and address them by name, while explaining some basic concepts in artificial intelligence. The videos themselves will be powerful demonstrations of what AI can do: The face, and the words it speaks, will be synthesized by software.

WPP doesn’t bill them as such, but its synthetic training videos might be called deepfakes, a loose term applied to images or videos generated using AI that look real. Read More

#fake, #training

Deepfake used to attack activist couple shows new disinformation frontier

Oliver Taylor, a student at England’s University of Birmingham, is a twenty-something with brown eyes, light stubble, and a slightly stiff smile.

Online profiles describe him as a coffee lover and politics junkie who was raised in a traditional Jewish home. His half dozen freelance editorials and blog posts reveal an active interest in anti-Semitism and Jewish affairs, with bylines in the Jerusalem Post and the Times of Israel.

The catch? Oliver Taylor seems to be an elaborate fiction. Read More

#fake, #image-recognition

Covid Drives Real Businesses to Tap Deepfake Technology

Read More

#fake

The Latest and Greatest AI-Enabled Deepfake Takes Us ‘Back to the Future’

With well over 6 million views since its mid-February release, YouTuber EZRyderX47’s Back to the Future deepfake video, with Robert Downey Jr. and Tom Holland seamlessly replacing Christopher Lloyd and Michael J. Fox, has become quite the viral sensation. The video is brilliantly done, from the lip-sync to the anything-but-uncanny eyes; the choice of film and clip was inspired as well, a welcome window into a new riff on a Hollywood classic. Produced using two readily available pieces of free software, HitFilm Express from FXhome and DeepFaceLab, the startlingly believable piece instantly conjures up all sorts of notions, both wonderful and sinister, about the seemingly unlimited horizons of AI-enhanced digital technology. If today’s visual magicians can create any image with stunning photoreal clarity, what, dare we ask, can propagandists, criminals, and other “bad” actors do with the same digital tools? Read More

#fake, #videos

Bring an Essence of Life to the Art

Read More

#fake, #videos

Fake images can fool autonomous cars, posing risks, Israeli researchers warn

Autonomous vehicles can be fooled by “phantom” images displayed on a road, wall or sign, causing them to unexpectedly brake or veer off course and making them vulnerable to attackers, Israeli researchers said.

Semi- and fully autonomous cars perceive and respond to two-dimensional projections as real objects, according to researchers from Ben-Gurion University of the Negev. Read More

#fake

This Technique Uses AI to Fool Other AIs

Artificial intelligence has made big strides recently in understanding language, but it can still suffer from an alarming, and potentially dangerous, kind of algorithmic myopia.

Research shows how AI programs that parse and analyze text can be confused and deceived by carefully crafted phrases. A sentence that seems straightforward to you or me may have a strange ability to deceive an AI algorithm. Read More
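
As a toy illustration of the kind of attack described above, the sketch below tries single-word synonym swaps until a classifier changes its label; classify and synonyms are hypothetical placeholder functions, not a real library API.

```python
def adversarial_rewrite(sentence, classify, synonyms):
    """Greedy single-word substitution: find a near-synonym swap that flips
    the classifier's prediction while reading the same to a human."""
    words = sentence.split()
    original_label = classify(sentence)
    for i, word in enumerate(words):
        for candidate in synonyms(word):
            trial = words[:i] + [candidate] + words[i + 1:]
            if classify(" ".join(trial)) != original_label:
                return " ".join(trial)  # prediction flipped
    return None  # no single-swap attack found within this budget
```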

#fake, #trust

Phantom of the ADAS

The absence of deployed vehicular communication systems, which prevents the advanced driving assistance systems (ADASs) and autopilots of semi- and fully autonomous cars from validating their virtual perception of the surrounding physical environment with a third party, has been exploited in various attacks suggested by researchers. Since applying these attacks comes with a cost (exposure of the attacker’s identity), the delicate exposure-versus-application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi- and fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads. We show that the car industry has not considered this type of attack by demonstrating it on today’s most advanced ADAS and autopilot technologies: the Mobileye 630 PRO and the Tesla Model X, HW 2.5. Our experiments show that when presented with various phantoms, a car’s ADAS or autopilot considers the phantoms real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. To mitigate this attack, we present a model that analyzes a detected object’s context, surface, and reflected light, and that is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers’ opportunities to apply phantom attacks but won’t eliminate them. Read More
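
The countermeasure the abstract describes, scoring a detected object on several visual aspects and fusing the results, might be structured roughly as follows. The per-aspect "expert" networks here are hypothetical placeholders, not the authors' trained models.

```python
import torch
import torch.nn as nn

class PhantomDetector(nn.Module):
    """Committee-style detector sketch: fuse per-aspect realness scores into
    a single verdict on whether a detected object is real or a projection."""
    def __init__(self, context_net, surface_net, light_net):
        super().__init__()
        self.context_net = context_net  # does the object's setting make sense?
        self.surface_net = surface_net  # does the surface texture look physical?
        self.light_net = light_net      # is the reflected light plausible?
        self.fuse = nn.Linear(3, 1)     # learned weighting of the three scores

    def forward(self, crop, context):
        scores = torch.cat([
            self.context_net(context),
            self.surface_net(crop),
            self.light_net(crop),
        ], dim=1)                        # shape [batch, 3], one score per aspect
        return torch.sigmoid(self.fuse(scores))  # probability the object is real
```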

#fake

Phantom Attacks Against Advanced Driving Assistance Systems

Read More

#fake, #videos