Poison in the Well

Securing the Shared Resources of Machine Learning

Progress in machine learning depends on trust. Researchers often place their advances in a public well of shared resources, and developers draw on those to save enormous amounts of time and money. Coders use the code of others, harnessing common tools rather than reinventing the wheel. Engineers use systems developed by others as a basis for their own creations. Data scientists draw on large public datasets to train machines to carry out routine tasks, such as image recognition, autonomous driving, and text analysis. Machine learning has accelerated so quickly and proliferated so widely largely because of this shared well of tools and data.

But the trust that so many place in these common resources is a security weakness. Poison in this well can spread, affecting the products that draw from it. Right now, it is hard to verify that the well of machine learning is free from malicious interference. In fact, there are good reasons to be worried. Attackers can poison the well’s three main resources—machine learning tools, pretrained machine learning models, and datasets for training—in ways that are extremely difficult to detect. Read More
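To make the dataset risk concrete, here is a minimal sketch, assuming a scikit-learn toy dataset and a 15% label-flip rate chosen purely for illustration (neither comes from the report), of how silently corrupting labels in a shared training set degrades any model later trained on it.

```python
# Minimal label-flipping poisoning sketch; the dataset, model, and flip rate
# are illustrative assumptions, not details taken from the report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An attacker with write access to the shared dataset flips 15% of the labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)
dirty_acc = LogisticRegression(max_iter=1000).fit(X_train, poisoned).score(X_test, y_test)
print(f"test accuracy with clean labels: {clean_acc:.3f}, with poisoned labels: {dirty_acc:.3f}")
```

Random flipping like this is the bluntest form of the attack; targeted poisoning that alters only carefully chosen points is usually far harder to detect, which is what makes the shared well so difficult to audit.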

#adversarial, #cyber

A Survey of Deep Learning Methods for Cyber Security

This survey paper presents a literature review of deep learning (DL) methods for cybersecurity applications. A short tutorial-style description of each DL method is provided, including deep autoencoders, restricted Boltzmann machines, recurrent neural networks, generative adversarial networks, and several others. The paper then discusses how each of these DL methods is used for security applications, covering a broad array of attack types including malware, spam, insider threats, network intrusions, false data injection, and malicious domain names used by botnets. Read More
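To make one of these pairings concrete, the sketch below shows a deep-autoencoder-style anomaly detector scoring network flows by reconstruction error; the 41-dimensional feature vector, the synthetic traffic, and the threshold are placeholders for illustration, not values from the survey.

```python
# Autoencoder anomaly-detection sketch for network intrusion detection; the
# feature size, synthetic data, and threshold are illustrative assumptions.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(41, 16), nn.ReLU(),   # 41 features, roughly the size of classic flow-record datasets
    nn.Linear(16, 41),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

benign = torch.rand(1024, 41)        # stand-in for normalised benign traffic features
for _ in range(200):                 # train to reconstruct benign traffic only
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(benign), benign)
    loss.backward()
    optimizer.step()

# At detection time, flows whose reconstruction error is far above the benign
# baseline are flagged as suspicious.
suspect = torch.rand(1, 41) * 3.0    # out-of-range values mimic an anomalous flow
error = nn.functional.mse_loss(autoencoder(suspect), suspect)
print("flagged as anomalous:", bool(error > 3 * loss))
```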

Review: Deep Learning techniques for Cyber Security

#cyber

Why AI is the key to robust anti-abuse defenses

This series of posts explains why artificial intelligence (AI) is the key to building anti-abuse defenses that keep up with user expectations and combat increasingly sophisticated attacks. The first post of the series provides a concise overview of how to harness AI to build robust anti-abuse protections. The remaining posts delve into the top 10 anti-abuse-specific challenges encountered while applying AI to abuse fighting, and how to overcome them. Following the natural progression of building and launching an AI-based defense system, the second post covers the challenges related to training, the third post delves into classification issues, and the fourth and final post looks at how attackers go about attacking AI-based defenses. Read More

#cyber

Cyberspace Is Neither Just an Intelligence Contest, nor a Domain of Military Conflict; SolarWinds Shows Us Why It’s Both

Operations in cyberspace—at least those perpetrated by nation-state actors and their proxies—reflect the geopolitical calculations of the actors who carry them out. Some, like Joshua Rovner and Jon Lindsay, have argued that strategic interactions between rivals in cyberspace reflect an intelligence contest. Others, like Jason Healey and Robert Jervis, have suggested that cyberspace is largely a domain of warfare or conflict. The contours of this debate as applied to the SolarWinds campaign have been outlined recently—Melissa Griffith shows how cyberspace is sometimes an intelligence contest, and other times a domain of conflict, depending on the strategic approaches and priorities of particular actors at a given moment in time.

Rather than treating warfare and intelligence as a binary choice of framework for cyberspace, the more interesting question is how these two dimensions relate to one another at the operational level, given that activity in cyberspace takes on both characteristics at different times. Read More

#cyber, #dod, #ic

Thousands of Tor exit nodes attacked cryptocurrency users over the past year

For more than 16 months, a threat actor has been seen adding malicious servers to the Tor network in order to intercept traffic and perform SSL stripping attacks on users accessing cryptocurrency-related sites.

The attacks, which began in January 2020, consisted of adding servers to the Tor network and marking them as “exit relays,” which are the servers through which traffic leaves the Tor network to re-enter the public internet after being anonymized. Read More
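One observable symptom of SSL stripping is that a connection which would normally be upgraded to HTTPS stays on plain HTTP. Below is a minimal sketch of that check using the Python standard library; the URL is a placeholder, not a site named in the article.

```python
# Minimal sketch: follow redirects from an http:// URL and report the final
# scheme. A site that normally upgrades to HTTPS but stays on http:// when
# reached through a particular circuit is one symptom of an SSL-stripping
# relay in the path. The URL below is only a placeholder.
from urllib.parse import urlparse
from urllib.request import urlopen

def final_scheme(url: str) -> str:
    """Return the scheme of the URL actually reached after redirects."""
    with urlopen(url, timeout=10) as response:
        return urlparse(response.geturl()).scheme

scheme = final_scheme("http://example.com/")
if scheme != "https":
    print("connection stayed on", scheme, "- no upgrade to HTTPS observed")
```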

#surveillance, #blockchain, #cyber

How we fought Search spam on Google in 2020

Webspam Report 2020

Google Search is a powerful tool to help you find useful information on the open web. Unfortunately, not all web pages are created with good intent. Many of them are explicitly created to deceive people, and that is something we fight against every day. To ensure your safety and protect your search experience against disruptive content and malicious behaviors, Search invested in many innovations in 2020.

The result is that very little spam actually makes it into the top results anyone sees for a search, thanks to our automated systems that are aided by AI. We estimate that these automated systems help keep more than 99% of visits from Search completely spam-free. Read More

#cyber

Microsoft Releases Open Source ‘Counterfit’ Tool for Attacking AI Systems

Microsoft on Monday announced the release of Counterfit as an open source project on GitHub, permitting organizations to test the security of their artificial intelligence (AI) software solutions by attacking them.

Counterfit is a command-line interface tool for conducting automated attacks at scale on AI systems. Microsoft built it as part of its own “red team” attack testing efforts. Organizations can use the tool to try to “evade and steal AI models,” Microsoft indicated. It has a logging capability that provides “telemetry” information, which can be used to understand AI model failures. Read More
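Counterfit's own commands are documented in its GitHub repository; the sketch below only illustrates, with a toy stand-in model and made-up parameters, the general kind of black-box evasion attack such a tool automates.

```python
# Generic black-box evasion sketch; the toy model, query budget, and noise
# scale are illustrative assumptions and do not reflect Counterfit's API.
import numpy as np

def black_box_model(x: np.ndarray) -> int:
    """Stand-in classifier the attacker can only query, not inspect."""
    return int(x.sum() > 5.0)

def random_evasion(x: np.ndarray, target: int, budget: int = 1000, eps: float = 0.1):
    """Probe the model with small random perturbations until the label flips."""
    rng = np.random.default_rng(0)
    for _ in range(budget):
        candidate = x + rng.normal(scale=eps, size=x.shape)
        if black_box_model(candidate) == target:
            return candidate
    return None

benign_input = np.array([1.2, 1.2, 1.2, 1.2])         # classified as 0 by the toy model
adversarial = random_evasion(benign_input, target=1)  # search for a nearby input classified as 1
print("evasion found:", adversarial is not None)
```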

#cyber

Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses

The rapid development of artificial intelligence, especially deep learning technology, has advanced autonomous driving systems (ADSs) by providing precise control decisions in response to almost any driving event, spanning from anti-fatigue safe driving to intelligent route planning. However, ADSs are still plagued by increasing threats from different attacks, which can be categorized into physical attacks, cyber attacks, and learning-based adversarial attacks. Inevitably, the safety and security of deep learning-based autonomous driving are severely challenged by these attacks, and countermeasures should be analyzed and studied comprehensively to mitigate all potential risks. This survey provides a thorough analysis of different attacks that may jeopardize ADSs, as well as the corresponding state-of-the-art defense mechanisms. The analysis is unrolled by taking an in-depth overview of each step in the ADS workflow, covering adversarial attacks on various deep learning models and attacks in both physical and cyber contexts. Furthermore, some promising research directions are suggested in order to improve the safety of deep learning-based autonomous driving, including model robustness training, model testing and verification, and anomaly detection based on cloud/edge servers. Read More
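For a sense of what a learning-based adversarial attack looks like in code, here is a minimal FGSM (fast gradient sign method) sketch; the toy model, input shape, and epsilon are placeholders, not anything taken from the survey or an actual driving stack.

```python
# Minimal FGSM sketch against a toy stand-in for a perception model; the
# model, input, and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder camera frame
label = torch.tensor([3])                             # placeholder ground-truth class
epsilon = 0.03                                        # perturbation budget (assumed)

loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel in the direction that increases the loss, then clamp to a valid range.
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
print("prediction before:", model(image).argmax(dim=1).item(),
      "prediction after:", model(adversarial).argmax(dim=1).item())
```

Physical-world variants of the same idea replace the pixel-space perturbation with printable patches or stickers on road signs, which is one reason the survey considers attacks in both digital and physical settings.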

#adversarial, #cyber

Hackers Used to Be Humans. Soon, AIs Will Hack Humanity

Like crafty genies, AIs will grant our wishes, and then hack them, exploiting our social, political, and economic systems like never before.

If you don’t have enough to worry about already, consider a world where AIs are hackers.

Hacking is as old as humanity. We are creative problem solvers. We exploit loopholes, manipulate systems, and strive for more influence, power, and wealth. To date, hacking has exclusively been a human activity. Not for long.

As I lay out in a report I just published, artificial intelligence will eventually find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at unprecedented speed, scale, and scope. After hacking humanity, AI systems will then hack other AI systems, and humans will be little more than collateral damage. Read More

#cyber

Explained: How An ML Model From AWS Detects Abnormal Machine Behaviour

Every machine is subject to wear and tear, which can lead to a loss of efficiency if left unattended. The health of equipment is key to driving operational efficiencies on the shop floor. To that end, Amazon Web Services (AWS) recently announced the general availability of Lookout for Equipment. The new service is equipped with machine learning models from AWS. Amazon Lookout for Equipment empowers industrial customers to apply machine learning to their equipment sensor data to carry out large-scale predictive maintenance.

Customers can use the service to precisely identify equipment anomalies, diagnose problems quickly, minimise false warnings, and prevent costly downtime by taking action before system failure. Read More
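AWS has not publicly detailed the exact models behind Lookout for Equipment, so the sketch below is only a stand-in for the underlying idea: learn a baseline from healthy sensor readings and flag departures from it. The simulated temperature and vibration channels and the alert threshold are made up for illustration.

```python
# Baseline-and-threshold anomaly scoring over simulated sensor data; this is
# an illustrative stand-in, not AWS's actual Lookout for Equipment method.
import numpy as np

rng = np.random.default_rng(42)
normal = rng.normal(loc=[50.0, 1.0], scale=[2.0, 0.1], size=(500, 2))  # temperature, vibration
faulty = rng.normal(loc=[65.0, 1.8], scale=[2.0, 0.1], size=(20, 2))   # simulated bearing fault
readings = np.vstack([normal, faulty])

# Fit a per-sensor baseline on an initial "healthy" window, then score each reading.
baseline = readings[:400]
mean, std = baseline.mean(axis=0), baseline.std(axis=0)
z_scores = np.abs((readings - mean) / std).max(axis=1)   # worst sensor at each timestep

alerts = np.where(z_scores > 4.0)[0]   # threshold chosen for illustration only
print(f"{len(alerts)} readings flagged as anomalous, first at index {alerts[0]}")
```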

#cyber