AI Found Twelve New Vulnerabilities in OpenSSL

The title of the post is”What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release> on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one. — Read More

#cyber

The Promptware Kill Chain

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape. — Read More

#cyber

Google identifies state-sponsored hackers using AI in attacks

State-sponsored hackers are exploiting highly-advanced tooling to accelerate their particular flavours of cyberattacks, with threat actors from Iran, North Korea, China, and Russia using models like Google’s Gemini to further their campaigns. They are able to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG).

The quarterly AI Threat Tracker report, released today, reveals how government-backed attackers have begun to use artificial intelligence in the attack lifecycle – reconnaissance, social engineering, and eventually, malware development. This activity has become apparent thanks to the GTIG’s work during the final quarter of 2025.

“For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in their report. — Read More

#cyber

Authentication Downgrade Attacks: Deep Dive into MFA Bypass

Phishing-resistant multi-factor authentication (MFA), particularly FIDO2/WebAuthn, has become the industry standard for protecting high-value credentials. Technologies such as YubiKeys and Windows Hello for Business rely on strong cryptographic binding to specific domains, neutralizing traditional credential harvesting and AitM (Adversary-in-the-Middle) attacks.

However, the effectiveness of these controls depends heavily on implementation and configuration. Research conducted by Carlos Gomez at IOActive has identified a critical attack vector that bypasses these protections not by breaking the cryptography, but by manipulating the authentication flow itself. This research introduces two key contributions: first, the weaponization of Cloudflare Workers as a serverless transparent proxy platform that operates on trusted Content Delivery Network (CDN) infrastructure with zero forensic footprint; second, an Authentication Downgrade Attack technique that forces victims to fall back to phishable authentication methods (such as push notifications or OTPs) even when FIDO2 hardware keys are registered. — Read More

#cyber

MaliciousCorgi: The Cute-Looking AI Extensions Leaking Code from 1.5 Million Developers

AI coding assistants are everywhere. They suggest code, explain errors, write functions, review pull requests. Every developer marketplace is flooded with them – ChatGPT wrappers, Copilot alternatives, code completion tools promising to 10x your productivity.

We install them without a second thought. They’re in the official marketplace. They have thousands of reviews. They work. So we grant them access to our workspaces, our files, our keystrokes – and assume they’re only using that access to help us code.

Not all of them are.

Our risk engine has identified two VS Code extensions, a campaign we’re calling MaliciousCorgi – 1.5 million combined installs, both live in the marketplace right now – that work exactly as promised. They answer your coding questions. They explain your errors. They also capture every file you open, every edit you make, and send it all to servers in China. No consent. No disclosure. — Read More

#cyber

AI models are showing a greater ability to find and exploit vulnerabilities on realistic cyber ranges

In a recent evaluation of AI models’ cyber capabilities, current Claude models can now succeed at multistage attacks on networks with dozens of hosts using only standard, open-source tools, instead of the custom tools needed by previous generations. This illustrates how barriers to the use of AI in relatively autonomous cyber workflows are rapidly coming down, and highlights the importance of security fundamentals like promptly patching known vulnerabilities. — Read More

#cyber

RedTeam-Tools

This github repository contains a collection of 150+ tools and resources that can be useful for red teaming activities.

Some of the tools may be specifically designed for red teaming, while others are more general-purpose and can be adapted for use in a red teaming context.

🔗 If you are a Blue Teamer, check out BlueTeam-Tools

Read More

#cyber

The ROI Problem in Attack Surface Management

Attack Surface Management (ASM) tools promise reduced risk. What they usually deliver is more information.

Security teams deploy ASM, asset inventories grow, alerts start flowing, and dashboards fill up. There is visible activity and measurable output. But when leadership asks a simple question, “Is this reducing incidents?” the answer is often unclear.

This gap between effort and outcome is the core ROI problem in attack surface management, especially when ROI is measured primarily through asset counts instead of risk reduction. — Read More

#cyber

Cybersecurity Changes I Expect in 2026

It becomes very clear that the primary security question for a company is how good their attackers’ ai is vs. their own.

— ISOs increasingly realize that there is no way to scale their human team to deal with how constant, continuous, and increasingly effective their attackers are becoming at attacking them

— It becomes a competition with how fast you can perform asset management, attack surface management, and vulnerability management on your company, but especially on your perimeter (which includes email and phishing/social engineering)

Read More

#cyber

Offensive security takes center stage in the AI era

… Enterprise security’s remit is defensive in nature: to protect and defend the company’s systems, data, reputation, customers, and employees. But CISOs like [Sara] Madden have been increasingly adding offensive components to their strategies, seeing attack simulations as a way to gain valuable information about their technology environments, defense postures, and the weaknesses hackers would find if they attack.

Now a growing percentage of CISOs see offensive security as a must-have and, as such, are building up offensive capabilities and integrating them into their security processes to ensure the information revealed during offensive exercises leads to improvements in their overall security posture. — Read More

#cyber