Corsight plans to release a new product that combines DNA and face recognition technology and could have significant law enforcement and privacy implications.
In this report, we examine Corsight’s product roadmap for “DNA to FACE,” presented at the 2021 Imperial Capital Investors Conference, possible use cases for the technology, and warnings from a privacy expert.
IPVM collaborated with MIT Technology Review on this report; see the MIT Technology Review article: “This company says it’s developing a system that can recognize your face from just your DNA.” Read More
A Perspective on Americans’ Attitudes Toward Artificial Intelligence
Research reveals Americans have fears and concerns about AI while embracing a larger role for AI in everyday life. That’s according to the Stevens TechPulse Report: A Perspective on Americans’ Attitudes Toward Artificial Intelligence, a new national poll of 2,200 adults conducted on behalf of Stevens Institute of Technology by Morning Consult. The survey examined Americans’ views on a wide range of AI-related issues. Read the news release.
“As the world and our lives grow increasingly dependent on artificial intelligence, it’s essential to assess its perceived impact, as well as identify gaps in knowledge that need to be addressed,” said Jason Corso, Ph.D., Brinning Professor of Computer Science and Director of Stevens Institute for Artificial Intelligence at Stevens Institute of Technology. “It’s clear from this research that, while people recognize the positives of AI, they also see much to be wary of — based, to some extent, on misunderstandings of the technology and what could help protect against those negative consequences.” Read More
The Fight to Define When AI Is ‘High Risk’
Everyone from tech companies to churches wants a say in how the EU regulates AI that could harm people.
People should not be slaves to machines, a coalition of evangelical church congregations from more than 30 countries preached to leaders of the European Union earlier this summer.
The European Evangelical Alliance believes all forms of AI with the potential to harm people should be evaluated, and that AI with the power to harm the environment should be labeled high risk, as should AI for transhumanism, the alteration of people with technology such as computers or machinery. It urged members of the European Commission to hold more discussion of what’s “considered safe and morally acceptable” when it comes to augmented humans and computer-brain interfaces. Read More
Practical Privacy with Synthetic Data
In this post, we will implement a practical attack on synthetic data models that was described in Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks by Nicholas Carlini et al. We will use this attack to see how well synthetic data models with various neural network and differential privacy parameter settings actually protect sensitive data and secrets in datasets. There are some pretty surprising results. Read More
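To make the attack concrete, here is a minimal sketch of the canary-exposure test from the Carlini et al. paper: plant a known "canary" secret in the training data, then check how highly the trained model ranks it among random alternatives. The scoring function and candidate format below are placeholders for illustration, not the post's actual code.

```python
import math
import random

# Minimal sketch of the Secret Sharer "exposure" test (Carlini et al.).
# Assumption: `sequence_log_perplexity` is a stand-in for however your trained
# synthetic-data model scores a candidate sequence; lower = more likely.

def sequence_log_perplexity(model, seq):
    # Placeholder scorer. Replace with the trained generative model's
    # log-perplexity of `seq` (e.g., summed negative log-probabilities).
    return -sum(model.get(token, math.log(1e-6)) for token in seq)

def exposure(model, canary, candidate_space):
    """Exposure = log2(|R|) - log2(rank of the true canary in R),
    where R is the space of candidate secrets, ranked by model perplexity."""
    scores = {cand: sequence_log_perplexity(model, cand) for cand in candidate_space}
    ranked = sorted(candidate_space, key=lambda c: scores[c])
    rank = ranked.index(canary) + 1  # 1-indexed rank of the planted canary
    return math.log2(len(candidate_space)) - math.log2(rank)

# Example: plant a 6-digit "secret" canary in the training data, train the
# synthetic data model, then check whether the model ranks it suspiciously high.
canary = tuple("My PIN is 418203")
candidates = [tuple(f"My PIN is {random.randint(0, 999999):06d}") for _ in range(9999)]
candidates.append(canary)

toy_model = {}  # stand-in for a trained model's token log-probabilities
print(f"exposure = {exposure(toy_model, canary, candidates):.2f} bits "
      f"(high exposure suggests the canary was memorized)")
```

A model that has memorized the canary assigns it unusually low perplexity, pushing its rank toward 1 and its exposure toward log2 of the candidate-space size.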
Google starts trialing its FLoC cookie alternative in Chrome
Google today announced that it is rolling out Federated Learning of Cohorts (FLoC), a crucial part of its Privacy Sandbox project for Chrome, as a developer origin trial.
FLoC is meant to be an alternative to the kind of cookies that advertising technology companies use today to track you across the web. Instead of a personally identifiable cookie, FLoC runs locally and analyzes your browsing behavior to group you into a cohort of like-minded people with similar interests (and doesn’t share your browsing history with Google). That cohort is specific enough to allow advertisers to do their thing and show you relevant ads, but without being so specific as to allow marketers to identify you personally.
This “interest-based advertising,” as Google likes to call it, lets you hide within a crowd of users with similar interests. The browser exposes only a cohort ID, while your browsing history and other data stay local. Read More
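For a rough sense of how on-device cohort assignment can work, here is a toy Python sketch in the spirit of FLoC's origin trial, which derived cohort IDs locally with a SimHash over visited domains. This is an illustration of the general idea only; the bit width, hashing scheme, and domain list are invented and this is not Chrome's implementation.

```python
import hashlib

# Toy locality-sensitive cohort assignment in the spirit of FLoC.
# Everything here is an assumption for illustration, not Chrome's code.

COHORT_BITS = 16  # toy cohort space of 2^16 IDs

def domain_hash(domain: str) -> int:
    return int.from_bytes(hashlib.sha256(domain.encode()).digest()[:8], "big")

def cohort_id(visited_domains: list) -> int:
    """SimHash the browsing history into a short cohort ID.
    Users with overlapping histories tend to land in the same cohort."""
    counts = [0] * COHORT_BITS
    for domain in visited_domains:
        h = domain_hash(domain)
        for bit in range(COHORT_BITS):
            counts[bit] += 1 if (h >> bit) & 1 else -1
    bits = 0
    for bit in range(COHORT_BITS):
        if counts[bit] > 0:
            bits |= 1 << bit
    return bits

# Everything above would run on-device; only the cohort ID is ever exposed.
history = ["news.example", "recipes.example", "cycling-forum.example"]
print(f"cohort id: {cohort_id(history)}")
```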
Why Some Models Leak Data
Machine learning models use large amounts of data, some of which can be sensitive. If they’re not trained correctly, sometimes that data is inadvertently revealed.
… Models of real world data are often quite complex—this can improve accuracy, but makes them more susceptible to unexpectedly leaking information. Medical models have inadvertently revealed patients’ genetic markers. Language models have memorized credit card numbers. Faces can even be reconstructed from image models.
… Training models with differential privacy stops the training data from leaking by limiting how much the model can learn from any one data point. Differentially private models are still at the cutting edge of research, but they’re being packaged into machine learning frameworks, making them much easier to use. When it isn’t possible to train differentially private models, there are also tools that can measure how much data the model is memorizing. Read More
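As a rough illustration of the core mechanism, here is a minimal PyTorch sketch of the DP-SGD idea: clip each example's gradient and add Gaussian noise before the update, so no single data point dominates what the model learns. The model, clipping bound, and noise multiplier are illustrative assumptions; in practice you would use a maintained framework such as Opacus or TensorFlow Privacy rather than a hand-rolled loop like this.

```python
import torch
from torch import nn

# Sketch of DP-SGD: per-example gradient clipping plus Gaussian noise.
# Hyperparameters are illustrative, not recommendations.

CLIP_NORM = 1.0         # per-example gradient clipping bound C
NOISE_MULTIPLIER = 1.1  # noise stddev = NOISE_MULTIPLIER * C

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def dp_sgd_step(batch_x, batch_y):
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(batch_x, batch_y):           # per-example gradients
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, CLIP_NORM / (norm + 1e-12))  # clip to norm <= C
        for s, g in zip(summed, grads):
            s += g * scale
    batch_size = len(batch_x)
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.normal(0.0, NOISE_MULTIPLIER * CLIP_NORM, size=p.shape)
            p.grad = (s + noise) / batch_size     # noisy, averaged gradient
    optimizer.step()

dp_sgd_step(torch.randn(8, 10), torch.randint(0, 2, (8,)))
```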
InstaHide: Instance-hiding Schemes for Private Distributed Learning
How can multiple distributed entities collaboratively train a shared deep net on their private data while preserving privacy? This paper introduces InstaHide, a simple encryption of training images that can be plugged into existing distributed deep learning pipelines. The encryption is efficient, and applying it during training has only a minor effect on test accuracy.
InstaHide encrypts each training image with a “one-time secret key” which consists of mixing a number of randomly chosen images and applying a random pixel-wise mask. Other contributions of this paper include: (a) Using a large public dataset (e.g. ImageNet) for mixing during its encryption, which improves security. (b) Experimental results to show effectiveness in preserving privacy against known attacks with only minor effects on accuracy. (c) Theoretical analysis showing that successfully attacking privacy requires attackers to solve a difficult computational problem. (d) Demonstrating that use of the pixel-wise mask is important for security, since Mixup alone is shown to be insecure against some efficient attacks. (e) Release of a challenge dataset to encourage new attacks. Read More
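The encryption step described above can be sketched in a few lines: mix each private image with randomly chosen (largely public) images using random Mixup-style coefficients, then flip the sign of each pixel with a random mask. The value of k and the coefficient scheme below are illustrative assumptions; see the paper for the exact construction.

```python
import numpy as np

# Sketch of the InstaHide encryption step: Mixup-style mixing with public
# images plus a random pixel-wise sign mask. Parameters are illustrative.

rng = np.random.default_rng(0)

def instahide_encrypt(private_img, public_pool, k=4):
    """Return one InstaHide-style encryption of `private_img`.
    Images are float arrays scaled to [-1, 1]."""
    # Pick k-1 other images (here: from a public pool such as ImageNet).
    idx = rng.choice(len(public_pool), size=k - 1, replace=False)
    imgs = [private_img] + [public_pool[i] for i in idx]

    # Random mixing coefficients that sum to 1 (part 1 of the one-time key).
    lambdas = rng.dirichlet(np.ones(k))
    mixed = sum(lam * img for lam, img in zip(lambdas, imgs))

    # Random pixel-wise +/-1 sign mask (part 2 of the one-time key).
    mask = rng.choice([-1.0, 1.0], size=mixed.shape)
    return mask * mixed

# Toy usage with random 32x32 RGB arrays standing in for real images.
public_pool = [rng.uniform(-1, 1, (32, 32, 3)) for _ in range(100)]
private_img = rng.uniform(-1, 1, (32, 32, 3))
encrypted = instahide_encrypt(private_img, public_pool)
print(encrypted.shape)
```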
Privacy Preserving Machine Learning: Threats and Solutions
For privacy concerns to be addressed adequately in today’s machine learning systems, the knowledge gap between the machine learning and privacy communities must be bridged. This article aims to provide an introduction to the intersection of both fields with special emphasis on the techniques used to protect the data. Read More
Chinese-Made Smartphones Are Secretly Stealing Money From People Around The World
Preinstalled malware on low-cost Chinese phones has stolen data and money from some of the world’s poorest people. Read More
Identity Recognition Based on Bioacoustics of Human Body
Current biometrics rely on images obtained from the structural information of physiological characteristics, which makes them inherently vulnerable to spoofing. Here, we studied personal identification using frequency-domain information based on human body vibration. We developed a bioacoustic frequency spectroscopy system and applied it to the fingers to obtain information on the anatomy, biomechanics, and biomaterial properties of the tissues. As a result, modulated microvibrations propagated through the body captured a unique spectral trait of each person; the biomechanical transfer characteristics persisted for two months and yielded 97.16% identity-authentication accuracy across 41 subjects. Ultimately, our method not only eliminates practical means of creating fake copies of the relevant characteristics but also provides reliable features. Read More
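To illustrate the general frequency-domain approach (not the authors' actual pipeline), the sketch below turns each vibration recording into a pooled magnitude spectrum and trains a classifier to recognize the subject. The sampling rate, band count, classifier choice, and toy data are all assumptions made for the example.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Generic frequency-domain identification sketch: FFT magnitude features
# from a body-vibration recording, fed to a classifier over subjects.

FS = 8000  # assumed sampling rate of the vibration sensor, in Hz

def spectral_features(signal, n_bins=128):
    """Magnitude spectrum of a recording, pooled into n_bins bands."""
    spectrum = np.abs(np.fft.rfft(signal))
    bands = np.array_split(spectrum, n_bins)
    return np.array([band.mean() for band in bands])

# Toy data: random signals standing in for recordings from 3 subjects.
rng = np.random.default_rng(0)
X = np.array([spectral_features(rng.normal(size=FS)) for _ in range(30)])
y = np.repeat([0, 1, 2], 10)  # subject labels

clf = make_pipeline(StandardScaler(), SVC())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```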