For the last two years, technologists have ominously predicted that AI coding agents will be responsible for a deluge of security vulnerabilities. They were right! Just not for the reasons they thought.
Within the next few months, coding agents will drastically alter both the practice and the economics of exploit development. Frontier model improvement won’t be a slow burn, but rather a step function. Substantial amounts of high-impact vulnerability research (maybe even most of it) will happen simply by pointing an agent at a source tree and typing “find me zero days”.
I think this outcome is locked in. That we’re starting to see its first clear indications. And that it will profoundly alter information security, and the Internet itself. — Read More
The LiteLLM Supply Chain Attack: A Complete Technical Breakdown Of The AI Ecosystem’s Darkest Hour
On March 24, 2026, the artificial intelligence development community experienced an unprecedented security catastrophe. LiteLLM, an essential open-source Python library used to route and manage API calls across hundreds of large language models, was weaponized in a highly sophisticated supply chain attack. Threat actors known as TeamPCP successfully published two malicious versions of the package (1.82.7 and 1.82.8) directly to the Python Package Index (PyPI).
With LiteLLM averaging 97 million monthly downloads and serving as a foundational dependency for industry titans like Stripe, Netflix, and Google alongside major AI frameworks such as CrewAI, DSPy, and MLflow, the magnitude of this compromise is staggering. — Read More
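Attacks like this are usually mitigated by pinning exact, audited dependency versions (ideally with pip's hash-checking mode) rather than floating ranges. As a minimal illustrative sketch, not a description of any affected company's tooling, a deploy step could also refuse to run against the two malicious releases named in the report:

```python
# Sketch: gate a deployment on the installed LiteLLM version, rejecting
# the two malicious releases named in the report (1.82.7 and 1.82.8).
# Pinning in requirements files with --require-hashes is the stronger
# control; this runtime check is only a last line of defense.
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # malicious releases named in the report


def is_compromised(version: str) -> bool:
    """Return True if the given version string is a known-bad release."""
    return version in COMPROMISED


def check_installed() -> None:
    """Raise if the locally installed litellm package is a compromised build."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return  # litellm not installed in this environment; nothing to check
    if is_compromised(version):
        raise RuntimeError(f"litellm {version} is a known-compromised release")


if __name__ == "__main__":
    check_installed()
```

The denylist approach shown here is deliberately simple; in practice an allowlist of audited versions, enforced in CI, is safer than enumerating known-bad ones.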
Beyond Analytics: The Silent Collection of Commercial Intelligence by TikTok and Meta Ad Pixels
TikTok and Meta’s tracking pixels are quietly harvesting personal data, granular checkout interactions, and detailed commerce intelligence from the websites that implement them. The collection goes far beyond what ad attribution requires, creating serious privacy compliance risks and competitive disadvantages for the businesses involved.
Jscrambler conducted a runtime analysis of the ad pixels used by TikTok and Meta on actual websites, revealing that their default behavior requires immediate attention from every organization that employs them. The analysis focused on large companies in the retail, hospitality, and healthcare sectors. However, it’s worth noting that most businesses with an online presence use these tracking pixels on their websites. — Read More
Google unleashes Gemini AI agents on the dark web
Google’s Gemini AI agents are crawling the dark web, sifting through upward of 10 million posts a day to find a handful of threats relevant to a particular organization.
Available now in public preview, the dark web intelligence service built into Google Threat Intelligence uses Gemini’s models to build a profile of a user’s organization. It then scours the dark web to determine the security risks it faces.
Google threat hunters told The Register that their internal tests show it can analyze millions of daily external events with 98 percent accuracy. — Read More
Federal cyber experts called Microsoft’s cloud a “pile of shit,” approved it anyway
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.
The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.
Or, as one member of the team put it: “The package is a pile of shit.”
… Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling—which included a kind of “buyer beware” notice to any federal agency considering GCC High—helped Microsoft expand a government business empire worth billions of dollars. — Read More
How I Use LLMs for Security Work
I’ve been using LLM tools like Claude, Cursor, and ChatGPT extensively in my security & engineering work for the past couple years. Not as a replacement for thinking—but they genuinely help me move faster through complex problems. If you’re a security analyst, SOC analyst, threat hunter or engineer who hasn’t found a rhythm with these tools yet, I’ll try to share what’s been working for me with the hope it helps you too. — Read More
How We Hacked McKinsey’s AI Platform
McKinsey & Company — the world’s most prestigious consulting firm — built an internal AI platform called Lilli for its 43,000+ employees. Lilli is a purpose-built system: chat, document analysis, RAG over decades of proprietary research, AI-powered search across 100,000+ internal documents. Launched in 2023, named after the first professional woman hired by the firm in 1945, adopted by over 70% of McKinsey, processing 500,000+ prompts a month.
So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.
Within 2 hours, the agent had full read and write access to the entire production database. — Read More
The Anthropic Shockwave: Why Claude Code Security Just Nuked Cybersecurity Stocks
The Dirty Secret of the SOC
Here is the nuclear option nobody in Silicon Valley wanted to talk about. For years, the cybersecurity industry has been a high-stakes gambling ring built on a house of cards. You pay millions for “endpoint protection” and “zero trust” wrappers that essentially act as expensive digital duct tape. But what happens when the tape is no longer needed because the hole in the wall simply ceases to exist?
Anthropic just pressed the button.
On February 20, 2026, the AI industry stopped playing nice. With the launch of Claude Code Security, Anthropic didn’t just release another “assistant.” They released a predator. This isn’t the usual incremental update. This is a paradigm shift where the LLM moves from “writing buggy code” to “fixing bugs that have existed since the Clinton administration.” — Read More
How to scan for vulnerabilities with GitHub Security Lab’s open source AI-powered framework
For the last few months, we’ve been using the GitHub Security Lab Taskflow Agent along with a new set of auditing taskflows that specialize in finding web security vulnerabilities. These taskflows also turn out to be very successful at finding high-impact vulnerabilities in open source projects.
As security researchers, we’re used to losing time on possible vulnerabilities that turn out to be unexploitable, but with these new taskflows, we can now spend more of our time on manually verifying the results and sending out reports. Furthermore, the severity of the vulnerabilities that we’re reporting is uniformly high. Many of them are authorization bypasses or information disclosure vulnerabilities that allow one user to log in as somebody else or to access the private data of another user.
Using these taskflows, we’ve reported more than 80 vulnerabilities so far. — Read More
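For context, the authorization-bypass class described above is often the classic insecure direct object reference pattern: a handler fetches whatever record the caller asks for without checking ownership. The sketch below is a hypothetical, framework-free illustration of that bug class and its fix, not taken from any vulnerability the taskflows actually reported:

```python
# Hypothetical illustration of an insecure direct object reference (IDOR),
# the vulnerability class described above. All names and data are invented.
DOCUMENTS = {
    1: {"owner": "alice", "body": "alice's private notes"},
    2: {"owner": "bob", "body": "bob's private notes"},
}


def get_document_vulnerable(current_user: str, doc_id: int) -> str:
    # BUG: the handler trusts the caller-supplied ID, so any authenticated
    # user can read any document simply by guessing its numeric ID.
    return DOCUMENTS[doc_id]["body"]


def get_document_fixed(current_user: str, doc_id: int) -> str:
    doc = DOCUMENTS[doc_id]
    # Fix: enforce an ownership check before returning the record.
    if doc["owner"] != current_user:
        raise PermissionError("not authorized for this document")
    return doc["body"]
```

The fix is a single ownership check, which is exactly why these bugs are both severe and easy to miss: nothing crashes, and the vulnerable path looks like working code.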
AI-Native networks are no longer a 6G promise – MWC 2026 just proved it
AI-native networks have been a recurring talking point at Mobile World Congress for years. What made MWC 2026 in Barcelona different was the evidence. A cascade of announcements from the world’s biggest telecom vendors, chipmakers, and operators didn’t just reiterate the vision for AI-RAN: they delivered field trial results, commercial product launches, open-source toolkits, and a multi-operator coalition committing to build 6G on AI-native foundations.
For enterprise and IT decision-makers, the signal is clear: the architectural shift happening in telecom infrastructure will soon reshape how connectivity is delivered, managed, and monetised. — Read More