Bad Actors Are Joining the AI Revolution: Here’s What We’ve Found in the Wild

Movies and TV shows have taught us to associate computer hackers with difficult tasks, detailed plots, and elaborate schemes.

What security researcher Carlos Fernández and I have recently found on open-source registries tells a different story: bad actors are favoring simplicity, effectiveness, and user-centered thinking. And to take their malicious code to the next level, they’re also adding new features assisted by ChatGPT.

Just like software-as-a-service (SaaS), part of the reason why malware-as-a-service (MaaS) offerings such as DuckLogsRedline Stealer, and Racoon Stealer have become so popular in underground markets is that they have active customer support channels and their products tend to be slick and user-friendly. Check these boxes, fill out this form, click this button… Here’s your ready-to-use malware sample! Needless to say, these products are often built by professional cybercriminals. Read More

#cyber

DHS Announces First-Ever AI Task Force

On Friday, Department of Homeland Security Secretary Alejandro Mayorkas announced the formation of a new resource group focused solely on combating negative repercussions of the widespread advent of artificial intelligence technologies.

The AI Task Force, unveiled during Mayorkas’s remarks before a Council on Foreign Relations event, will analyze adverse impacts surrounding generative AI systems such as ChatGPT as well as potential uses for the emerging technology.

… Some of the focal points of the AI Task Force highlighted by DHS include integrating AI in supply chain and border trade management, countering the flow of fentanyl into the U.S., and applying AI to digital forensic tools to counter child exploitation and abuse.  Read More

#cyber, #ic

A Cryptographic Near Miss

Go 1.20.2 fixed a small vulnerability in the crypto/elliptic package. The impact was minor, to the point that I don’t think any application was impacted, but the issue was interesting to look at as a near-miss, and to learn from, teaching that while assumptions might be valid now, they aren’t guaranteed to be valid in the future. Read More

#cyber

New study shows how scary fast today’s AI is at cracking passwords

Passwords with more than 18 characters were deemed safe from AI password-cracking tools.

  • Cybersecurity firm Home Security Heroes published a study about AI and its ability to crack passwords.
  • A new AI password cracking tool is capable of cracking a majority of six-character passwords and under instantly.
  • Passwords with 12 characters or more are considered tough to crack, for now.
Read More

#cyber

LLMs and Phishing

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. Read More

#cyber

Microsoft’s latest use for GPT4: Stopping hackers

The tech giant unveiled new cybersecurity software, escalating the arms race between defenders and hackers

Microsoft’s rapid campaign to integrate new artificial intelligence technology into its broad range of products continued Tuesday as the tech giant announced a new cybersecurity “co-pilot” meant to help companies track and defend against hacking attempts, upping the ante in the never-ending arms race between hackers and the cybersecurity professionals trying to keep them at bay.

It’s the latest salvo in Microsoft’s battle with Google and other tech companies to dominate the fast-growing field of “generative” AI, though it’s still unclear whether the flurry of product launches, demos and proclamations from executives will change the tech industry as dramatically as leaders are predicting. Read More

#cyber

Facebook accounts hijacked by new malicious ChatGPT Chrome extension

A trojanized version of the legitimate ChatGPT extension for Chrome is gaining popularity on the Chrome Web Store, accumulating over 9,000 downloads while stealing Facebook accounts.

The extension is a copy of the legitimate popular add-on for Chrome named “ChatGPT for Google” that offers ChatGPT integration on search results. However, this malicious version includes additional code that attempts to steal Facebook session cookies.

The publisher of the extension uploaded it to the Chrome Web Store on February 14, 2023, but only started promoting it using Google Search advertisements on March 14, 2023. Since then, it has had an average of a thousand installations per day. Read More

#cyber

ChatGPT Helped Win a Hackathon

A team from cybersecurity firm Claroty used the AI bot to write code to exploit vulnerabilities in industrial systems

The ChatGPT AI bot has spurred speculation about how hackers might use it and similar tools to attack faster and more effectively, though the more damaging exploits so far have been in laboratories.

In its current form, the ChatGPT bot from OpenAI, an artificial-intelligence startup backed by billions of dollars from Microsoft Corp., is mainly trained to digest and generate text. For security chiefs, that means bot-written phishing emails might be more convincing than, for example, messages from a hacker whose first language isn’t English. 

… Two security researchers from cybersecurity company Claroty Ltd. said ChatGPT helped them win the Zero Day Initiative’s hack-a-thon in Miami last month. Read More

#cyber

More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models

We are currently witnessing dramatic advances in the capabilities of Large Language Models (LLMs). They are already being adopted in practice and integrated into many systems, including integrated development environments (IDEs) and search engines. The functionalities of current LLMs can be modulated via natural language prompts, while their exact internal functionality remains implicit and unassessable. This property, which makes them adaptable to even unseen tasks, might also make them susceptible to targeted adversarial prompting. Recently, several ways to misalign LLMs using Prompt Injection (PI) attacks have been introduced. In such attacks, an adversary can prompt the LLM to produce malicious content or override the original instructions and the employed filtering schemes. Recent work showed that these attacks are hard to mitigate, as state-of-the-art LLMs are instruction-following. So far, these attacks assumed that the adversary is directly prompting the LLM. In this work, we show that augmenting LLMs with retrieval and API calling capabilities (so-called Application-Integrated LLMs) induces a whole new set of attack vectors. These LLMs might process poisoned content retrieved from the Web that contains malicious prompts pre-injected and selected by adversaries. We demonstrate that an attacker can indirectly perform such PI attacks. Based on this key insight, we systematically analyze the resulting threat landscape of Application-Integrated LLMs and discuss a variety of new attack vectors. To demonstrate the practical viability of our attacks, we implemented specific demonstrations of the proposed attacks within synthetic applications. In summary, our work calls for an urgent evaluation of current mitigation techniques and an investigation of whether new techniques are needed to defend LLMs against these threats. Read More

#chatbots, #cyber, #adversarial

Planting Undetectable Backdoors in Machine Learning Models

Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners.We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.

  • First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input—a property we call non-replicability.
  • Second, we demonstrate how to insert undetectable backdoors in models trained using the Random Fourier Features (RFF) learning paradigm (Rahimi, Recht; NeurIPS 2007). In this construction, undetectability holds against powerful white-box distinguishers: given a complete description of the network and the training data, no efficient distinguisher can guess whether the model is “clean” or contains a backdoor. The backdooring algorithm executes the RFF algorithm faithfully on the given training data, tampering only with its random coins. We prove this strong guarantee under the hardness of the Continuous Learning With Errors problem (Bruna, Regev, Song, Tang; STOC 2021). We show a similar white-box undetectable backdoor for random ReLU networks based on the hardness of Sparse PCA (Berthet, Rigollet; COLT 2013).
Our construction of undetectable backdoors also sheds light on the related issue of robustness to adversarial examples. In particular, by constructing undetectable backdoor for an “adversarially-robust” learning algorithm, we can produce a classifier that is indistinguishable from a robust classifier, but where every input has an adversarial example! In this way, the existence of undetectable backdoors represent a significant theoretical roadblock to certifying adversarial robustness. Read More

#cyber