LEVERAGING MACHINE LEARNING TO ENHANCE ACOUSTIC EAVESDROPPING ATTACKS

This multi-part series explores how machine learning can enhance eavesdropping on cellular audio using gyroscopes and accelerometers — inertial sensors commonly built into mobile devices to measure motion through Micro-Electro-Mechanical Systems (MEMS) technology. The research was conducted over the summer by one of our interns, Alec K., and a newly hired full-time engineer, August H.

Introduction

Acoustic eavesdropping attacks are a potentially devastating threat to the confidentiality of user information, especially when implemented on smartphones, which are now ubiquitous. However, conventional microphone-based attacks are limited on smartphones by the fact that applications must obtain the user's consent before collecting microphone data. Recently, eavesdropping researchers have turned to side-channel attacks, which leverage information leaked by a piece of hardware to reconstruct some kind of secret (in this case, the audio we want to listen in on).

Unlike the microphone, which requires explicit user permission to access, sensors like the gyroscope and accelerometer can be read by an Android application without any explicit consent from the user. These sensors are sensitive to the vibrations caused by sound, and since some Android devices allow sampling them at rates up to 500 Hz, it is possible to reconstruct sound from their readings. — Read More
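
As a rough sketch of why a 500 Hz sensor stream is enough to matter: the Nyquist limit at that rate is 250 Hz, which covers the fundamental frequency range of most human speech. The Python snippet below is not the attack itself (the research relies on machine learning over noisy sensor data); it only illustrates the bandwidth argument, and all rates and names in it are illustrative.

```python
import numpy as np

# Illustrative rates only: 48 kHz stands in for the original audio,
# 500 Hz for the fastest gyroscope/accelerometer sampling mentioned above.
audio_rate = 48_000
sensor_rate = 500
tone_hz = 150  # within the typical fundamental range of human speech

t = np.arange(0, 1.0, 1 / audio_rate)
audio = np.sin(2 * np.pi * tone_hz * t)  # stand-in for speech reaching the device

# Crudely simulate the sensor by keeping every 96th sample (48000 / 500).
sensor_stream = audio[:: audio_rate // sensor_rate]

# The tone is still the dominant component of the 500 Hz stream.
spectrum = np.abs(np.fft.rfft(sensor_stream))
freqs = np.fft.rfftfreq(len(sensor_stream), d=1 / sensor_rate)
print(f"strongest component: {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~150 Hz
```

Real attacks additionally have to contend with the sensor's mechanical response, noise, and aliasing of higher-frequency content, which is where the machine learning models come in.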

#cyber

New physical attacks are quickly diluting secure enclave defenses from Nvidia, AMD, and Intel

Trusted execution environments, or TEEs, are everywhere—in blockchain architectures, virtually every cloud service, and computing involving AI, finance, and defense contractors. It’s hard to overstate the reliance that entire industries have on three TEEs in particular: Confidential Compute from Nvidia, SEV-SNP from AMD, and SGX and TDX from Intel. All three come with assurances that confidential data and sensitive computing can’t be viewed or altered, even if a server has suffered a complete compromise of the operating system kernel.

A trio of novel physical attacks raises new questions about the true security offered by these TEEs and about the exaggerated promises and misconceptions coming from the big and small players using them.

The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and SGX/TDX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month—which worked only against CPUs using DDR4 memory—TEE.fail works against DDR5, allowing it to target the latest TEEs. — Read More

#strategy

#cyber

Why IP address truncation fails at anonymization

You’ve probably seen it in analytics dashboards, server logs, or privacy documentation: IP addresses with their last octet zeroed out. 192.168.1.42 becomes 192.168.1.0. For IPv6, maybe the last 64 or 80 bits are stripped. This practice is widespread, often promoted as “GDPR-compliant pseudonymization,” and implemented by major analytics platforms, log aggregation services, and web servers worldwide.

There’s just one problem: truncated IP addresses are still personal data under GDPR.

If you’re using IP address truncation thinking it makes data “anonymous” or “non-personal,” you’re creating a false sense of security. European data protection authorities, including the French CNIL, Italian Garante, and Austrian DPA, have repeatedly ruled that truncated IPs remain personal data, especially when combined with other information.

This is a fundamental misunderstanding of what constitutes effective anonymization. — Read More
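
For readers who want to see how little the truncation actually removes, here is a minimal Python sketch using the standard library's ipaddress module. The truncate helper is illustrative rather than taken from any particular analytics product; it zeroes the host bits exactly as described above and then counts how many addresses still map to the truncated value.

```python
import ipaddress

def truncate(ip: str, v4_prefix: int = 24, v6_prefix: int = 48) -> str:
    """Zero the host bits, like the 'last octet' / 'last 80 bits' schemes above."""
    addr = ipaddress.ip_address(ip)
    prefix = v4_prefix if addr.version == 4 else v6_prefix
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(truncate("192.168.1.42"))  # 192.168.1.0

# How much ambiguity did that actually add? At most 256 candidate hosts.
print(ipaddress.ip_network("192.168.1.42/24", strict=False).num_addresses)  # 256
```

A pool of 256 candidate hosts (or a single /48 for IPv6), combined with timestamps, user agents, and requested URLs, is usually enough to single an individual out, which is why the regulators cited above treat the result as pseudonymized personal data rather than anonymous data.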

#cyber

Maximizing the Value of Indicators of Compromise and Reimagining Their Role in Modern Detection

Have we become so focused on TTPs that we’ve dismissed the value at the bottom of the pyramid? This post explores what role, if any, IOCs have in a modern detection program, and what the future may look like for them.

You’d be hard-pressed to find a detection engineer who doesn’t know the Pyramid of Pain[1]. It, along with MITRE ATT&CK[2], really solidified the argument for prioritizing behavioral detections. I know I’ve used it to make that exact point many times.

Lately, though, I’ve wondered if we’ve pushed its lesson too far. Have we become so focused on TTPs that we’ve dismissed the value at the bottom of the pyramid? The firehose of indicators is a daily reality, and it’s time our detection strategies caught up by taking a more pragmatic approach to indicators: their effectiveness, their nuances, and how to get the most value out of the time we are required to spend on them. — Read More
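
The post's own recommendations are behind the link, but as a hedged sketch of the pragmatic end of the spectrum: exact-match IOCs are cheap to evaluate at scale, and much of their cost comes from letting them pile up without an aging policy. The Python snippet below is a hypothetical illustration, not the author's tooling; the field names, dates, indicators, and 90-day window are all made up.

```python
from datetime import datetime, timedelta

# Everything below is made up for illustration: field names, dates, and the
# 90-day aging window are not taken from the post.
IOC_MAX_AGE = timedelta(days=90)

indicators = {
    "45.155.205.233": datetime(2025, 9, 1),          # a C2 IP, keyed to its publish date
    "login-update.example": datetime(2025, 10, 12),  # a phishing domain
}

def match(event: dict, now: datetime) -> list[str]:
    """Return indicators present in a log event that are still inside the aging window."""
    hits = []
    for field in ("src_ip", "dst_ip", "domain", "sha256"):
        value = event.get(field)
        if value in indicators and now - indicators[value] <= IOC_MAX_AGE:
            hits.append(value)
    return hits

event = {"src_ip": "45.155.205.233", "domain": "cdn.example.org"}
print(match(event, now=datetime(2025, 10, 20)))  # ['45.155.205.233']
```

Even something this simple makes the trade-off explicit: matching is nearly free, so the real engineering questions are curation and expiry.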

#cyber

How I Would Restart My Cybersecurity Career in 2025

Read More

#cyber, #videos

Nation-state hackers deliver malware from “bulletproof” blockchains

Hacking groups—at least one of which works on behalf of the North Korean government—have found a new and inexpensive way to distribute malware from “bulletproof” hosts: stashing it on public cryptocurrency blockchains.

In a Thursday post, members of the Google Threat Intelligence Group said the technique provides the hackers with their own “bulletproof” host, a term that describes cloud platforms that are largely immune from takedowns by law enforcement and pressure from security researchers. More traditionally, these hosts are located in countries without treaties agreeing to enforce criminal laws from the US and other nations. These services often charge hefty sums and cater to criminals spreading malware or peddling child sexual abuse material and wares sold in crime-based flea markets. — Read More

#blockchain, #cyber

Introducing CodeMender: an AI agent for code security

… Software vulnerabilities are notoriously difficult and time-consuming for developers to find and fix, even with traditional, automated methods like fuzzing. Our AI-based efforts like Big Sleep and OSS-Fuzz have demonstrated AI’s ability to find new zero-day vulnerabilities in well-tested software. As we achieve more breakthroughs in AI-powered vulnerability discovery, it will become increasingly difficult for humans alone to keep up.

CodeMender helps solve this problem by taking a comprehensive approach to code security that’s both reactive, instantly patching new vulnerabilities, and proactive, rewriting and securing existing code and eliminating entire classes of vulnerabilities in the process. Over the past six months that we’ve been building CodeMender, we have already upstreamed 72 security fixes to open source projects, including some as large as 4.5 million lines of code.

By automatically creating and applying high-quality security patches, CodeMender’s AI-powered agent helps developers and maintainers focus on what they do best — building good software. — Read More

#cyber

Building AI for cyber defenders

AI models are now useful for cybersecurity tasks in practice, not just theory. As research and experience demonstrated the utility of frontier AI as a tool for cyber attackers, we invested in improving Claude’s ability to help defenders detect, analyze, and remediate vulnerabilities in code and deployed systems. This work allowed Claude Sonnet 4.5 to match or eclipse Opus 4.1, our frontier model released only two months prior, in discovering code vulnerabilities and other cyber skills. Adopting and experimenting with AI will be key for defenders to keep pace.

We believe we are now at an inflection point for AI’s impact on cybersecurity.

For several years, our team has carefully tracked the cybersecurity-relevant capabilities of AI models. Initially, we found that models were not particularly capable at advanced, meaningful cybersecurity tasks. However, over the past year or so, we’ve noticed a shift. — Read More

#cyber

How Hackers Hack Websites

Read More

#cyber, #videos