Am I a Real or Fake Celebrity?

Recently, significant advancements have been made in face recognition technologies using Deep Neural Networks. As a result, companies such as Microsoft, Amazon, and Naver offer highly accurate commercial face recognition web services for diverse applications to meet end-user needs. Naturally, however, such technologies face persistent threats, as virtually any individual can quickly implement impersonation attacks. In particular, these attacks can pose a significant threat to authentication and identification services, which rely heavily on the accuracy and robustness of their underlying face recognition technologies. Despite its gravity, the issue of deepfake abuse using commercial web APIs and their robustness has not yet been thoroughly investigated. This work provides a measurement study on the robustness of black-box commercial face recognition APIs against Deepfake Impersonation (DI) attacks, using celebrity recognition APIs as an example case study. We use five deepfake datasets, two of which we created and plan to release. More specifically, we measure attack performance under two scenarios (targeted and non-targeted) and further analyze the differing system behaviors using fidelity, confidence, and similarity metrics. Accordingly, we demonstrate how vulnerable the face recognition technologies of popular companies are to DI attacks, achieving maximum success rates of 78.0% and 99.9% for targeted (i.e., precise match) and non-targeted (i.e., match with any celebrity) attacks, respectively. Moreover, we propose practical defense strategies to mitigate DI attacks, reducing the attack success rates to as low as 0% and 0.02% for targeted and non-targeted attacks, respectively. Read More
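
The evaluation described above boils down to querying a black-box API with deepfake images and scoring two outcomes. Below is a minimal sketch of that scoring logic; it is not the paper's code, and the `recognize` wrapper around a celebrity recognition API is an assumed placeholder.

```python
# Minimal sketch (not the paper's code): scoring deepfake impersonation attempts
# against a hypothetical black-box celebrity recognition endpoint.
from typing import Callable, List, Optional, Tuple

def attack_success_rates(
    samples: List[Tuple[str, str]],            # (deepfake image path, impersonated celebrity name)
    recognize: Callable[[str], Optional[str]], # assumed API wrapper: image path -> predicted celebrity or None
) -> Tuple[float, float]:
    """Return (targeted, non_targeted) success rates.

    Targeted: the API returns exactly the impersonated celebrity.
    Non-targeted: the API returns any celebrity at all.
    """
    targeted_hits = 0
    non_targeted_hits = 0
    for image_path, target_name in samples:
        prediction = recognize(image_path)
        if prediction is not None:
            non_targeted_hits += 1
            if prediction == target_name:
                targeted_hits += 1
    n = len(samples)
    return targeted_hits / n, non_targeted_hits / n
```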

#fake, #image-recognition

Tom Cruise deepfake creator says public shouldn’t be worried about ‘one-click fakes’

Weeks of work and a top impersonator were needed to make the viral clips

When a series of spookily convincing Tom Cruise deepfakes went viral on TikTok, some suggested it was a chilling sign of things to come, a harbinger of an era in which AI will let anyone make fake videos of anyone else. The videos' creator, Belgian VFX specialist Chris Ume, says this is far from the case. Speaking to The Verge about his viral clips, Ume stresses the amount of time and effort that went into making each deepfake, as well as the importance of working with a top-flight Tom Cruise impersonator, Miles Fisher.

“You can’t do it by just pressing a button,” says Ume. “That’s important, that’s a message I want to tell people.” Each clip took weeks of work, he says, using the open-source DeepFaceLab algorithm as well as established video editing tools. “By combining traditional CGI and VFX with deepfakes, it makes it better. I make sure you don’t see any of the glitches.” Read More

#fake, #image-recognition

Self-supervised Pretraining of Visual Features in the Wild

Recently, self-supervised learning methods like MoCo [22], SimCLR [8], BYOL [20] and SwAV [7] have reduced the gap with supervised methods. These results have been achieved in a controlled environment, that is, the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image and from any unbounded dataset. In this work, we explore whether self-supervision lives up to its expectation by training large models on random, uncurated images with no supervision. Our final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images with 512 GPUs, achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real-world setting. Interestingly, we also observe that self-supervised models are good few-shot learners, achieving 77.9% top-1 with access to only 10% of ImageNet. Read More
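
SEER pretrains on uncurated images with no labels. The sketch below illustrates the general shape of one such self-supervised step using a toy SimCLR-style contrastive objective as a stand-in for the SwAV objective the paper actually uses; the `encoder` and `augment` callables are assumed placeholders, not part of the paper's code.

```python
# Conceptual sketch only: a SimCLR-style contrastive step on unlabeled images.
import torch
import torch.nn.functional as F

def contrastive_step(encoder, augment, images, temperature=0.1):
    """One self-supervised step: two augmented views of each image should embed close together."""
    view1, view2 = augment(images), augment(images)   # two random views per image
    z1 = F.normalize(encoder(view1), dim=1)           # (N, D) unit-norm embeddings
    z2 = F.normalize(encoder(view2), dim=1)
    logits = z1 @ z2.t() / temperature                # (N, N) cosine similarities
    targets = torch.arange(len(images), device=logits.device)
    # Matching views (the diagonal) are positives; all other pairs act as negatives.
    return F.cross_entropy(logits, targets)
```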

#big7, #image-recognition

Intel and EXOS Pilot 3D Athlete Tracking with Pro Football Hopefuls

Read More

#image-recognition, #videos

China’s ‘Sharp Eyes’ Program Aims to Surveil 100% of Public Space

One of China’s largest and most pervasive surveillance networks got its start in a small county about seven hours north of Shanghai.

Sharp Eyes is one of a number of overlapping and intersecting technological surveillance projects built by the Chinese government over the last two decades. Projects like the Golden Shield Project, Safe Cities, SkyNet, Smart Cities, and now Sharp Eyes mean that there are more than 200 million public and private security cameras installed across China. Read More

#china, #surveillance

12 Ways to Hack 2FA

Read More

#cyber, #videos

The AI Index Report: Measuring trends in Artificial Intelligence

The fourth annual report from Stanford University’s Institute for Human-Centered Artificial Intelligence (HAI) found that after surpassing the US in the total number of journal publications several years ago, China now also leads in journal citations. The report also found that the technologies necessary for large-scale surveillance are rapidly maturing, with techniques for image classification, face recognition, video analysis, and voice identification all seeing significant progress in 2020. Read More

#china-vs-us, #surveillance

AI Moving to the Edge

As edge computing demands increase, major cloud providers are announcing solutions to fill that need: Google with Coral, Amazon with Panorama, and now Microsoft with Percept. As Microsoft’s John Roach said, there are “millions of scenarios becoming possible thanks to a combination of artificial intelligence and computing on the edge. Standalone edge devices can take advantage of AI tools for things like translating text or recognizing images without having to constantly access cloud computing capabilities.” Read More
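
For a concrete sense of what recognizing images without a cloud round-trip looks like, here is a minimal on-device inference sketch using the TensorFlow Lite runtime. The model file name is an assumption, and the sketch is independent of Coral, Panorama, or Percept.

```python
# Illustrative sketch: on-device image classification with TensorFlow Lite, no cloud call.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="mobilenet_v2.tflite")  # assumed local model file
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

def classify(image: np.ndarray) -> int:
    """Run one frame through the local model and return the top class index.

    Assumes `image` already matches the model's expected height, width, and channels.
    """
    frame = np.expand_dims(image.astype(input_detail["dtype"]), axis=0)
    interpreter.set_tensor(input_detail["index"], frame)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_detail["index"])[0]
    return int(np.argmax(scores))
```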

#iot, #big7

U.S. Unprepared for AI Competition with China, Commission Finds

Retaining any edge will take White House leadership and a substantial investment, according to the National Security Commission on Artificial Intelligence.

The National Security Commission on Artificial Intelligence is out with its comprehensive final report recommending a path forward for ensuring U.S. superiority in AI that calls for the Defense Department and the intelligence community to become “AI-ready” by 2025. 

On Monday, during a public meeting, NSCAI voted to approve its final report, which will also be sent to Congress. The report culminates two years of work that began after the 2019 National Defense Authorization Act established the commission to review advances in AI, machine learning, and associated technologies. Read More

#dod, #ic

Google’s Model Search automatically optimizes and identifies AI models

Google today announced the release of Model Search, an open source platform designed to help researchers develop machine learning models efficiently and automatically. Instead of focusing on a specific domain, Google says that Model Search is domain-agnostic, making it capable of finding a model architecture that fits a dataset and problem while minimizing coding time and compute resources. Read More
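
As a rough intuition for what an automated, domain-agnostic model search does, the sketch below tries a small grid of candidate architectures on a dataset and keeps the best performer. This is a conceptual illustration only, not the Model Search API; every name and hyperparameter in it is a placeholder.

```python
# Conceptual illustration of automated model search: evaluate candidate
# architectures and keep the best. NOT the google/model_search API.
import itertools
import tensorflow as tf

def search(x_train, y_train, x_val, y_val, num_classes):
    """Evaluate a small grid of MLP architectures and return (best accuracy, best config)."""
    best = (0.0, None)
    for depth, width in itertools.product([1, 2, 3], [64, 128, 256]):
        layers = [tf.keras.layers.Flatten()]
        layers += [tf.keras.layers.Dense(width, activation="relu") for _ in range(depth)]
        layers += [tf.keras.layers.Dense(num_classes, activation="softmax")]
        model = tf.keras.Sequential(layers)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=3, verbose=0)
        _, acc = model.evaluate(x_val, y_val, verbose=0)
        if acc > best[0]:
            best = (acc, (depth, width))
    return best
```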

#big7, #frameworks