Google says new cloud-based “Private AI Compute” is just as secure as local processing

Google’s current mission is to weave generative AI into as many products as it can, getting everyone accustomed to, and maybe even dependent on, working with confabulatory robots. That means it needs to feed the bots a lot of your data, and that’s getting easier with the company’s new Private AI Compute. Google claims its new secure cloud environment will power better AI experiences without sacrificing your privacy.

The pitch sounds a lot like Apple’s Private Cloud Compute. Google’s Private AI Compute runs on “one seamless Google stack” powered by the company’s custom Tensor Processing Units (TPUs). These chips have integrated secure elements, and the new system allows devices to connect directly to the protected space via an encrypted link.

Google’s TPUs rely on an AMD-based Trusted Execution Environment (TEE) that encrypts and isolates memory from the host. Theoretically, that means no one else—not even Google itself—can access your data. Google says independent analysis by NCC Group shows that Private AI Compute meets its strict privacy guidelines. — Read More
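The excerpt doesn’t spell out the handshake, but the usual attest-then-encrypt pattern behind TEEs looks roughly like the sketch below: the device verifies a signed attestation quote from the enclave before sending any data. The vendor key, quote format, and measurement check are illustrative assumptions, not Google’s actual API.

```python
# Hypothetical sketch of attest-then-encrypt; nothing here is Google's API.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

EXPECTED_MEASUREMENT = b"pinned-hash-of-enclave-code"  # assumed format

def enclave_is_trustworthy(vendor_key: Ed25519PublicKey,
                           quote: bytes, signature: bytes) -> bool:
    """Accept the enclave only if (1) the hardware vendor's root key
    signed the attestation quote and (2) the quote reports the code
    measurement the client expects."""
    try:
        vendor_key.verify(signature, quote)  # hardware root of trust
    except InvalidSignature:
        return False
    return EXPECTED_MEASUREMENT in quote     # the right code is running

# Only after this check succeeds would the device open an encrypted
# session keyed to the enclave and upload any personal data.
```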

#privacy

VaultGemma: The world’s most capable differentially private LLM

As AI becomes more integrated into our lives, building it with privacy at its core is a critical frontier for the field. Differential privacy (DP) offers a mathematically robust solution by adding calibrated noise to prevent memorization of training data. Applying DP to LLMs, however, introduces real trade-offs: the noise alters traditional scaling laws (the rules describing how performance changes with model and data size), reduces training stability (the model’s ability to learn consistently without catastrophic events like loss spikes or divergence), and significantly increases both the batch size (the number of training examples processed together in each step) and the overall computation cost.
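For a concrete feel for where those trade-offs come from, here is a minimal sketch of the DP-SGD recipe underlying DP training: clip each example’s gradient, average, and add calibrated Gaussian noise. The learning rate, clip norm, and noise multiplier below are illustrative placeholders, not VaultGemma’s actual settings.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD step: clip each example's gradient to bound its
    influence, average, then add Gaussian noise scaled to the bound."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    avg = np.mean(clipped, axis=0)
    # Noise on the averaged gradient shrinks as 1/batch_size, which is
    # why DP training pushes toward very large batches.
    batch = len(clipped)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / batch,
                             size=avg.shape)
    return params - lr * (avg + noise)

# Toy usage: four per-example gradients for a three-parameter model.
params = np.zeros(3)
grads = [np.random.randn(3) for _ in range(4)]
params = dp_sgd_step(params, grads)
```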

Our new research, “Scaling Laws for Differentially Private Language Models”, conducted in partnership with Google DeepMind, establishes laws that accurately model these intricacies, providing a complete picture of the compute-privacy-utility trade-offs. Guided by this research, we’re excited to introduce VaultGemma, the largest (1B-parameter) open model trained from scratch with differential privacy. We are releasing the weights on Hugging Face and Kaggle, alongside a technical report, to advance the development of the next generation of private AI. — Read More

#privacy

I really don’t like ChatGPT’s new memory dossier

Last month ChatGPT got a major upgrade. As far as I can tell, the closest thing to an official announcement was this tweet from @OpenAI:

Starting today [April 10th 2025], memory in ChatGPT can now reference all of your past chats to provide more personalized responses, drawing on your preferences and interests to make it even more helpful for writing, getting advice, learning, and beyond.

This memory FAQ document has a few more details, including that this “Chat history” feature is currently only available to paid accounts:

Saved memories and Chat history are offered only to Plus and Pro accounts. Free-tier users have access to Saved memories only.

This makes a huge difference to the way ChatGPT works: it can now behave as if it has recall over prior conversations, meaning it will be continuously customized based on that previous history. — Read More
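OpenAI hasn’t published how the feature is implemented. One plausible mechanism, and it is purely an assumption here, is retrieving relevant snippets from past chats and prepending them to the model’s context, roughly like this toy sketch.

```python
# Hypothetical sketch of a cross-chat memory feature: rank past-chat
# snippets by crude word overlap with the new prompt, then prepend the
# best matches to the context. Not OpenAI's actual implementation.

def retrieve_relevant(history: list[str], prompt: str, k: int = 3) -> list[str]:
    words = set(prompt.lower().split())
    scored = sorted(history,
                    key=lambda s: -len(words & set(s.lower().split())))
    return scored[:k]

def build_context(history: list[str], prompt: str) -> str:
    memory = "\n".join(f"- {s}" for s in retrieve_relevant(history, prompt))
    return (f"Things you know about this user from past chats:\n"
            f"{memory}\n\nUser: {prompt}")

past = ["User lives near the coast", "User prefers Python",
        "User has a dog named Cleo"]
print(build_context(past, "Write a Python script to sort my dog photos"))
```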

#privacy

Personal AI Assistants and Privacy

Microsoft is trying to create a personal digital assistant:

At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users. — Read More

#privacy

Building end-to-end security for Messenger

Today, we’re announcing that we’ve begun to upgrade people’s personal conversations on Messenger to use end-to-end encryption (E2EE) by default. Our aim is to ensure that everyone’s personal messages on Messenger can only be accessed by the sender and the intended recipients, and that everyone can be sure the messages they receive are from an authentic sender.

Meta is also publishing two technical white papers on its end-to-end encryption upgrade.

Read More
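Meta has said the Messenger upgrade builds on the Signal protocol; the sketch below shows only the basic primitive underneath that stack, an X25519 key agreement feeding an authenticated cipher, so that only the two key holders can read or forge a message. The session labels are illustrative.

```python
# Minimal E2EE building block (not Meta's full protocol, which adds the
# Signal double ratchet and more): X25519 key agreement + AEAD.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def session_key(me: X25519PrivateKey, peer_public) -> bytes:
    """Both sides derive the same 32-byte key from the DH shared secret."""
    shared = me.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo-session").derive(shared)

alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
nonce = os.urandom(12)
box = ChaCha20Poly1305(session_key(alice, bob.public_key())).encrypt(
    nonce, b"hi Bob", None)

# Bob decrypts (and authenticates) with the same derived key.
assert ChaCha20Poly1305(session_key(bob, alice.public_key())).decrypt(
    nonce, box, None) == b"hi Bob"
```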

#big7, #privacy

OpenAI’s hunger for data is coming back to bite it

The company’s AI services may be breaking data protection laws, and there is no resolution in sight.

OpenAI has just over a week to comply with European data protection laws following a temporary ban in Italy and a slew of investigations in other EU countries. If it fails, it could face hefty fines, be forced to delete data, or even be banned. 

But experts have told MIT Technology Review that it will be next to impossible for OpenAI to comply with the rules. That’s because of the way data used to train its AI models has been collected: by hoovering up content off the internet. Read More

#privacy

Privacy Violations Shutdown OpenAI ChatGPT and Beg Investigation

File this under ClosedAI.

For much of March 20th, ChatGPT displayed a giant orange warning at the top of its interface saying it was unable to load chat history.

After a while it switched to a more subtle warning, which was still disappointing.

… their status page has been intentionally vague about privacy violations that caused the history feature to be immediately pulled.

… All that being said, they’re not being very open about the fact that chat users were seeing other users’ chat history. This level of privacy nightmare is kind of a VERY BIG DEAL. Read More

#privacy

TSA now wants to scan your face at security. Here are your rights.

16 major domestic airports are testing facial recognition tech to verify IDs — and it could go nationwide in 2023

Next time you’re at airport security, get ready to look straight into a camera. The TSA wants to analyze your face.

The Transportation Security Administration has been quietly testing controversial facial recognition technology for passenger screening at 16 major domestic airports — from Washington to Los Angeles — and hopes to expand it across the United States as soon as next year. Kiosks with cameras are doing a job that used to be completed by humans: checking the photos on travelers’ IDs to make sure they’re not impostors. Read More

#privacy, #surveillance

IARPA Kicks off Research Into Linguistic Fingerprint Technology

The Intelligence Advanced Research Projects Activity (IARPA), the research and development arm of the Office of the Director of National Intelligence, today announced the launch of a program that seeks to engineer novel artificial intelligence technologies capable of attributing authorship and protecting authors’ privacy.

The Human Interpretable Attribution of Text Using Underlying Structure (HIATUS) program represents the Intelligence Community’s latest research effort to advance human language technology. The resulting innovations could have far-reaching impacts, with the potential to counter foreign malign influence activities; identify counterintelligence risks; and help safeguard authors who could be endangered if their writing is connected to them. Read More
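For context, classical authorship attribution (stylometry) already works surprisingly well with simple features. This toy baseline, character n-grams plus a linear classifier on invented data, illustrates the kind of signal HIATUS aims to both sharpen and help at-risk authors evade.

```python
# Toy stylometric attribution baseline; the texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I reckon the meeting ought to start promptly.",
         "gonna push the meeting, lol, traffic is wild",
         "One ought to consider, I reckon, a prompt reply.",
         "lol ok pushing the reply, that's wild"]
authors = ["A", "B", "A", "B"]

model = make_pipeline(
    # Character n-grams capture style (punctuation, contractions,
    # habitual phrasing) rather than topic.
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression())
model.fit(texts, authors)

print(model.predict(["I reckon that ought to suffice."]))
# Likely ['A'], given the shared "reckon"/"ought" style markers.
```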

#privacy, #ic

Why Lockdown mode from Apple is one of the coolest security ideas ever

Apple intros “extreme” optional protection against the scourge of mercenary spyware.

Mercenary spyware is one of the hardest threats to combat. It targets an infinitesimally small percentage of the world, making it statistically unlikely for most of us to ever see it. And yet, because the sophisticated malware only selects the most influential individuals (think diplomats, political dissidents, and lawyers), it has a devastating effect that’s far out of proportion to the small number of people infected.

This puts device and software makers in a bind. How do you build something to protect what’s likely well below 1 percent of your user base against malware built by companies like NSO Group, maker of clickless exploits that instantly convert fully updated iOS and Android devices into sophisticated bugging devices?

On Wednesday, Apple previewed an ingenious option it plans to add to its flagship OSes in the coming months to counter the mercenary spyware menace. The company is upfront—almost in your face—that Lockdown mode is an option that will degrade the user experience and is intended for only a small number of users. Read More

#cyber, #privacy, #surveillance