State media and new regulations are going after dodgy company practices. The government, meanwhile, still gets a free pass.
Hundreds of thousands of surveillance cameras throughout China have been hoovering up facial recognition data without notifying the people attached to the faces. Now, the companies behind the tech are finally under the microscope after a blistering recent exposé — one carried by a major mouthpiece for Beijing, the same government known for its own untrammeled intrusions into private life.
On March 15, China Central Television broadcast its annual consumer rights gala, a long-running special that has uncovered many high-profile consumer frauds. One segment revealed that facial recognition security cameras at chain stores nationwide have been collecting shoppers’ personal information without their knowledge or consent. The revelations ignited a furious backlash against the companies. “Two characters,” went one popular online comment about the findings: “wu chi (无耻),” meaning “shameless.” It’s another instance of grassroots pushback against surveillance tech in China, a global leader in surveillance research as well as in deployment.
The central irony went unremarked: that Beijing has become both the critic and the perpetrator of mass surveillance.
The great data decentralization is coming — are you ready?
The move to cloud computing is one of the most important technology shifts of our generation. Along with it, the decades-long push to centralize data storage in a single warehouse is coming to an end, as dumping everything into a “data lake” has caused more harm than good.
For some applications, centralizing data via cloud storage solutions such as Amazon S3 and Snowflake works well enough (read: Snowflake’s IPO). At the same time, several major forces are driving greater data decentralization. Here are three of the biggest.
Your Face Is Not Your Own
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit — and blew the future of privacy in America wide open.
In May 2019, an agent at the Department of Homeland Security received a trove of unsettling images. Found by Yahoo in a Syrian user’s account, the photos seemed to document the sexual abuse of a young girl. One showed a man with his head reclined on a pillow, gazing directly at the camera. The man appeared to be white, with brown hair and a goatee, but it was hard to make him out clearly; the photo was grainy, the angle a bit oblique. The agent sent the man’s face to child-crime investigators around the country in the hope that someone might recognize him.
When an investigator in New York saw the request, she ran the face through an unusual new facial-recognition app she had just started using, called Clearview AI. The team behind it had scraped the public web — social media, employment sites, YouTube, Venmo — to create a database with three billion images of people, along with links to the webpages from which the photos had come. This dwarfed the databases of other such products for law enforcement, which drew only on official photography like mug shots, driver’s licenses and passport pictures; with Clearview, it was effortless to go from a face to a Facebook account.