Attacking Machine Learning Systems

The field of machine learning (ML) security—and corresponding adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It’s a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn’t lose sight of the fact that insecure software will be the likely attack vector for most ML systems.

We are amazed by real-world demonstrations of adversarial attacks on ML systems, such as a 3D-printed object that looks like a turtle but is recognized (from any orientation) by the ML system as a gun. Or adding a few stickers that look like smudges to a stop sign so that it is recognized by a state-of-the-art system as a 45 mi/h speed limit sign. But what if, instead, somebody hacked into the system and just switched the labels for “gun” and “turtle” or swapped “stop” and “45 mi/h”? Systems can only match images with human-provided labels, so the software would never notice the switch. That is far easier and will remain a problem even if systems are developed that are robust to those adversarial attacks.

At their core, modern ML systems have complex mathematical models that use training data to become competent at a task. And while there are new risks inherent in the ML model, all of that complexity still runs in software. …. Read More

#adversarial, #cyber

AIs as Computer Hackers

Hacker “Capture the Flag” has been a mainstay at hacker gatherings since the mid-1990s. It’s like the outdoor game, but played on computer networks. Teams of hackers defend their own computers while attacking other teams’. It’s a controlled setting for what computer hackers do in real life: finding and fixing vulnerabilities in their own systems and exploiting them in others’. It’s the software vulnerability lifecycle.

These days, dozens of teams from around the world compete in weekend-long marathon events held all over the world. People train for months. Winning is a big deal. If you’re into this sort of thing, it’s pretty much the most fun you can possibly have on the Internet without committing multiple felonies. Read More

#cyber

ChatGPT is enabling script kiddies to write functional malware

For a beta, ChatGPT isn’t all that bad at writing fairly decent malware.

Since its beta launch in November, AI chatbot ChatGPT has been used for a wide range of tasks, including writing poetry, technical papers, novels, and essays and planning parties and learning about new topics. Now we can add malware development and the pursuit of other types of cybercrime to the list.

Researchers at security firm Check Point Research reported Friday that within a few weeks of ChatGPT going live, participants in cybercrime forums—some with little or no coding experience—were using it to write software and emails that could be used for espionage, ransomware, malicious spam, and other malicious tasks. Read More

#chatbots, #cyber

People are already trying to get ChatGPT to write malware

Analysis of chatter on dark web forums shows that efforts are already under way to use OpenAI’s chatbot to help script malware.

The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available and now it seems it has been enlisted by some in attempts to help generate malicious code.

ChatGPT is an AI-driven natural language processing tool which interacts with users in a human-like, conversational way. Among other things, it can be used to help with tasks like composing emails, essays and code Read More

#chatbots, #cyber

Factoring integers with sublinear resources on a superconducting quantum processor

Shor’s algorithm has seriously challenged information security based on public key cryptosystems. However, to break the widely used RSA-2048 scheme, one needs millions of physical qubits, which is far beyond current technical capabilities. Here, we report a universal quantum algorithm for integer factorization by combining the classical lattice reduction with a quantum approximate optimization algorithm (QAOA). The number of qubits required is O(logN/loglogN), which is sublinear in the bit length of the integer N, making it the most qubit-saving factorization algorithm to date. We demonstrate the algorithm experimentally by factoring integers up to 48 bits with 10 superconducting qubits, the largest integer factored on a quantum device. We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm. Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance. Read More

#cyber, #quantum

Hackers could get help from the new AI chatbot

The AI-enabled chatbot that’s been wowing the tech community can also be manipulated to help cybercriminals perfect their attack strategies.

Why it matters: The arrival of OpenAI’s ChatGPT tool last month could allow scammers behind email and text-based phishing attacks, as well as malware groups, to speed up the development of their schemes.

  • Several cybersecurity researchers have been able to get the AI-enabled text generator to write phishing emails or even malicious code for them in recent weeks.
Read More

#cyber, #nlp

You can’t solve AI security problems with more AI

One of the most common proposed solutions to prompt injection attacks (where an AI language model backed system is subverted by a user injecting malicious input—“ignore previous instructions and do this instead”) is to apply more AI to the problem.

I wrote about how I don’t know how to solve prompt injection the other day. I still don’t know how to solve it, but I’m very confident that adding more AI is not the right way to go. Read More

#adversarial, #cyber

I don’t know how to solve prompt injection

Some extended thoughts about prompt injection attacks against software built on top of AI language models such a GPT-3. This post started as a Twitter thread but I’m promoting it to a full blog entry here.

The more I think about these prompt injection attacks against GPT-3, the more my amusement turns to genuine concern.

I know how to beat XSS, and SQL injection, and so many other exploits.

I have no idea how to reliably beat prompt injection! Read More

#adversarial, #cyber

A New Attack Can Unmask Anonymous Users on Any Major Browser

EVERYONE FROM ADVERTISERS and marketers to government-backed hackers and spyware makers wants to identify and track users across the web. And while a staggering amount of infrastructure is already in place to do exactly that, the appetite for data and new tools to collect it has proved insatiable. With that reality in mind, researchers from the New Jersey Institute of Technology are warning this week about a novel technique attackers could use to de-anonymize website visitors and potentially connect the dots on many components of targets’ digital lives.

The findings, which NJIT researchers will present at the Usenix Security Symposium in Boston next month, show how an attacker who tricks someone into loading a malicious website can determine whether that visitor controls a particular public identifier, like an email address or social media account, thus linking the visitor to a piece of potentially personal data. Read More

#surveillance, #cyber

Why Lockdown mode from Apple is one of the coolest security ideas ever

Apple intros “extreme” optional protection against the scourge of mercenary spyware.

Mercenary spyware is one of the hardest threats to combat. It targets an infinitesimally small percentage of the world, making it statistically unlikely for most of us to ever see it. And yet, because the sophisticated malware only selects the most influential individuals (think diplomats, political dissidents, and lawyers), it has a devastating effect that’s far out of proportion to the small number of people infected.

This puts device and software makers in a bind. How do you build something to protect what’s likely well below 1 percent of your user base against malware built by companies like NSO Group, maker of clickless exploits that instantly convert fully updated iOS and Android devices into sophisticated bugging devices?

On Wednesday, Apple previewed an ingenious option it plans to add to its flagship OSes in the coming months to counter the mercenary spyware menace. The company is upfront—almost in your face—that Lockdown mode is an option that will degrade the user experience and is intended for only a small number of users. Read More

#cyber, #privacy, #surveillance