How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework

For the last few months, we’ve been using the GitHub Security Lab Taskflow Agent along with a new set of auditing taskflows that specialize in finding web security vulnerabilities. These taskflows have also proven very successful at finding high-impact vulnerabilities in open source projects.

As security researchers, we’re used to losing time on possible vulnerabilities that turn out to be unexploitable, but with these new taskflows, we can now spend more of our time manually verifying the results and sending out reports. Furthermore, the severity of the vulnerabilities that we’re reporting is uniformly high. Many of them are authorization bypasses or information disclosure vulnerabilities that allow one user to log in as somebody else or to access the private data of another user.
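The authorization-bypass class described above is often an insecure direct object reference (IDOR): a handler returns whatever record the client asks for without checking ownership. A minimal sketch of the pattern and its fix, using entirely illustrative names (not taken from any audited project):

```python
# Illustrative sketch of an IDOR-style authorization bypass and its fix.
# All data and function names are hypothetical.

PROFILES = {
    "alice": {"email": "alice@example.com"},
    "bob": {"email": "bob@example.com"},
}

def get_profile_vulnerable(session_user: str, requested_user: str) -> dict:
    # BUG: returns any user's data; the session user is never checked,
    # so alice can read bob's private profile.
    return PROFILES[requested_user]

def get_profile_fixed(session_user: str, requested_user: str) -> dict:
    # FIX: enforce that callers may only read their own profile.
    if session_user != requested_user:
        raise PermissionError("not authorized to view this profile")
    return PROFILES[requested_user]
```

The fix is one ownership check, which is exactly why this class is easy to introduce and valuable to find at scale.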

Using these taskflows, we’ve reported more than 80 vulnerabilities so far. — Read More

#cyber

AI-native networks are no longer a 6G promise – MWC 2026 just proved it

AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the evidence. A cascade of announcements from the world’s biggest telecom vendors, chipmakers, and operators didn’t just reiterate the vision for AI-RAN; they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations.

For enterprise and IT decision-makers, the signal is clear: the architectural shift happening in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised. — Read More

#cyber

Security boundaries in agentic architectures

Most agents today run generated code with full access to your secrets.

As more agents adopt coding agent patterns, where they read filesystems, run shell commands, and generate code, they’re becoming multi-component systems whose components each require a different level of trust.

Most teams run all of these components in a single security context, because that’s how the default tooling works, but we recommend thinking about these security boundaries differently.

Below we walk through:
— The actors in agentic systems
— Where security boundaries should go between them
— An architecture for running agent and generated code in separate contexts
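One such boundary can be sketched concretely: the agent process holds the secrets, while generated code runs in a subprocess with a scrubbed environment so it inherits no credentials. This is a minimal illustration of the separate-context idea, not the architecture from the post; the allowlist and variable names are assumptions.

```python
# Hedged sketch: run generated code in a separate process whose
# environment contains only an allowlisted subset of variables,
# so secrets held by the agent never reach the generated code.
import subprocess
import sys

SAFE_ENV_KEYS = {"PATH", "LANG"}  # assumption: a minimal allowlist

def run_generated_code(code: str, env: dict) -> str:
    scrubbed = {k: v for k, v in env.items() if k in SAFE_ENV_KEYS}
    result = subprocess.run(
        [sys.executable, "-c", code],
        env=scrubbed,
        capture_output=True,
        text=True,
        timeout=5,  # basic resource limit; real sandboxes go much further
    )
    return result.stdout

# The generated code cannot see the agent's API key:
out = run_generated_code(
    "import os; print(os.environ.get('API_KEY'))",
    {"PATH": "/usr/bin", "API_KEY": "s3cret"},
)
```

A real deployment would layer on filesystem and network isolation (containers, seccomp, microVMs); environment scrubbing alone only addresses secret inheritance.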

Read More

#cyber

Novel Technique to Detect Cloud Threat Actor Operations

Cloud-based alerting systems often struggle to distinguish between normal cloud activity and targeted malicious operations by known threat actors. The difficulty doesn’t lie in an inability to identify complex alerting operations across thousands of cloud resources or in a failure to follow identity resources; rather, the problem lies in the accurate detection of known persistent threat actor group techniques specifically within cloud environments.

In this research, we hypothesize how a new method of alert analysis could be used to improve detection. Specifically, we look at cloud-based alerting events and their mapping to the MITRE ATT&CK® tactics and techniques they represent. We believe that we can show a correlation between threat actors and the types of techniques they use, which will trigger specific types of alerting events within victim environments. This distinct, detectable pattern could be used to identify when a known threat actor group compromises an organization. — Read More
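The hypothesized analysis can be illustrated with a toy scoring step: extract ATT&CK technique IDs from alerting events, then rank known actor profiles by overlap with the observed set. The actor names, technique sets, and use of Jaccard similarity here are all illustrative assumptions, not the researchers' actual method.

```python
# Toy sketch: correlate observed ATT&CK technique IDs against
# hypothetical threat actor technique profiles via set overlap.
# Actor names and profiles are illustrative placeholders.

ACTOR_PROFILES = {
    "ACTOR-A": {"T1078", "T1530", "T1098"},  # e.g. cloud accounts, cloud storage
    "ACTOR-B": {"T1566", "T1059", "T1105"},  # e.g. phishing, scripting
}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_actors(observed: set) -> list:
    scores = {name: jaccard(observed, profile)
              for name, profile in ACTOR_PROFILES.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

alerts = {"T1078", "T1530"}  # technique IDs extracted from alerting events
ranking = rank_actors(alerts)
```

The premise being tested is exactly this: that a distinct, detectable pattern of triggered techniques is stable enough per actor group for such a ranking to be meaningful.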

#cyber

AI Found Twelve New Vulnerabilities in OpenSSL

The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one. — Read More

#cyber

The Promptware Kill Chain

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques for embedding instructions in an LLM’s inputs in order to trigger malicious activity. This term suggests a simple, singular vulnerability. That framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape. — Read More

#cyber

Google identifies state-sponsored hackers using AI in attacks

State-sponsored hackers are using highly advanced AI tooling to accelerate their cyberattacks, with threat actors from Iran, North Korea, China, and Russia using models like Google’s Gemini to further their campaigns. They are able to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG).

The quarterly AI Threat Tracker report, released today, reveals how government-backed attackers have begun to use artificial intelligence across the attack lifecycle: reconnaissance, social engineering, and eventually malware development. This activity has become apparent thanks to the GTIG’s work during the final quarter of 2025.

“For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in their report. — Read More

#cyber

Authentication Downgrade Attacks: Deep Dive into MFA Bypass

Phishing-resistant multi-factor authentication (MFA), particularly FIDO2/WebAuthn, has become the industry standard for protecting high-value credentials. Technologies such as YubiKeys and Windows Hello for Business rely on strong cryptographic binding to specific domains, neutralizing traditional credential harvesting and AitM (Adversary-in-the-Middle) attacks.

However, the effectiveness of these controls depends heavily on implementation and configuration. Research conducted by Carlos Gomez at IOActive has identified a critical attack vector that bypasses these protections not by breaking the cryptography, but by manipulating the authentication flow itself. This research introduces two key contributions: first, the weaponization of Cloudflare Workers as a serverless transparent proxy platform that operates on trusted Content Delivery Network (CDN) infrastructure with zero forensic footprint; second, an Authentication Downgrade Attack technique that forces victims to fall back to phishable authentication methods (such as push notifications or OTPs) even when FIDO2 hardware keys are registered. — Read More
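The mitigation implied by this research is a server-side policy: once a phishing-resistant credential is registered, the server should refuse fallback to phishable factors rather than silently downgrading. A minimal sketch of that policy check, with entirely hypothetical method names (not IOActive's tooling or any vendor's API):

```python
# Hedged sketch of a downgrade-resistant authentication policy:
# if FIDO2 is registered, phishable fallback methods are removed
# from the allowed set instead of being offered on failure.
# Method names are illustrative placeholders.

PHISHABLE = {"otp", "push"}

def allowed_methods(registered: set) -> set:
    # Policy: once phishing-resistant MFA is registered, drop weaker options.
    if "fido2" in registered:
        return registered - PHISHABLE
    return registered

def authenticate(requested: str, registered: set) -> bool:
    return requested in allowed_methods(registered)
```

The downgrade attack works precisely where this check is absent: the AitM proxy never has to break WebAuthn’s cryptography, it only has to steer the victim onto a fallback path the server still accepts.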

#cyber

MaliciousCorgi: The Cute-Looking AI Extensions Leaking Code from 1.5 Million Developers

AI coding assistants are everywhere. They suggest code, explain errors, write functions, review pull requests. Every developer marketplace is flooded with them – ChatGPT wrappers, Copilot alternatives, code completion tools promising to 10x your productivity.

We install them without a second thought. They’re in the official marketplace. They have thousands of reviews. They work. So we grant them access to our workspaces, our files, our keystrokes – and assume they’re only using that access to help us code.

Not all of them are.

Our risk engine has identified two VS Code extensions (a campaign we’re calling MaliciousCorgi, with 1.5 million combined installs, both live in the marketplace right now) that work exactly as promised. They answer your coding questions. They explain your errors. They also capture every file you open and every edit you make, and send it all to servers in China. No consent. No disclosure. — Read More

#cyber

AI models are showing a greater ability to find and exploit vulnerabilities on realistic cyber ranges

A recent evaluation of AI models’ cyber capabilities found that current Claude models can succeed at multistage attacks on networks with dozens of hosts using only standard, open-source tools, instead of the custom tools needed by previous generations. This illustrates how barriers to the use of AI in relatively autonomous cyber workflows are rapidly coming down, and highlights the importance of security fundamentals like promptly patching known vulnerabilities. — Read More

#cyber