The web has always had to adapt to new standards. It learned to speak to web browsers, and then it learned to speak to search engines. Now, it needs to speak to AI agents.
Today, we are excited to introduce isitagentready.com — a new tool to help site owners understand how to optimize their sites for agents, from guiding agents on how to authenticate, to controlling what content agents can see, the format they receive it in, and how they pay for it. We are also introducing a new dataset on Cloudflare Radar that tracks the overall adoption of each agent standard across the Internet. — Read More
Daily Archives: April 21, 2026
Agents are starting to operate real systems — who’s actually in control?
AI agents have moved from copilots to economic actors faster than the infrastructure around them can keep up.
While agents now execute tasks and transact, they still lack standardized ways to prove who they are, what they’re authorized to do, and how they get paid across environments. Identity doesn’t travel, payments aren’t yet programmable by default, and coordination happens in silos.
Blockchains address this at the infrastructure layer. Public ledgers give every transaction a receipt that anyone can audit. Wallets give agents portable identity. Stablecoins are an alternative settlement layer. These aren’t future primitives. They work today, and they can help agents operate permissionlessly as real economic actors. — Read More
The AI engineering stack we built internally — on the platform we ship
In the last 30 days, 93% of Cloudflare’s R&D organization used AI coding tools powered by infrastructure we built on our own platform.
Eleven months ago, we undertook a major project: to truly integrate AI into our engineering stack. We needed to build the internal MCP servers, access layer, and AI tooling necessary for agents to be useful at Cloudflare. We pulled together engineers from across the company to form a tiger team called iMARS (Internal MCP Agent/Server Rollout Squad). The sustained work landed with the Dev Productivity team, who also own much of our internal tooling including CI/CD, build systems, and automation.
… MCP servers were the starting point, but the team quickly realized we needed to go further: rethink how standards are codified, how code gets reviewed, how engineers onboard, and how changes propagate across thousands of repos.
This post dives deep into what that looked like over the past eleven months and where we ended up. — Read More
The Boy That Cried Mythos: Verification is Collapsing Trust in Anthropic
I’ve been getting more and more curious about the risk from Anthropic’s Claude Mythos Preview. So I pulled the system card, a whopping, inefficient 244-page document that devotes just seven pages to the claim that the model is too dangerous to release. In fact, the 23MB PDF I had to download contained 20MB of wasted time and space: compressing it to 3MB meant I lost exactly nothing.
Foreshadowing, I guess.
Spoiler alert: the crucial seven pages out of 244 do not contain the word “fuzzer” once. That’s like a seven-page vacation brochure for Hawaii that leaves out the word “beaches.”
Also, the crucial seven pages out of 244 do not contain the expected acronyms CVSS, CWE, or CVE; they include no comparison baseline, no independent reproduction, and not even the word “thousands.” I’ll get back to all of that in a minute. — Read More
Benchmarking Self-Hosted LLMs for Offensive Security
LLM Agents can Autonomously Exploit One-day Vulnerabilities demonstrated that frontier models can exploit known vulnerabilities when given appropriate tooling. And if you have used Claude Code, you have no doubt seen how well it can reverse engineer.
However, Benchmarking Practices in LLM-driven Offensive Security surveyed multiple papers in this space and found that only around 25% evaluated local or small models. The majority relied on GPT-4 or similar cloud-hosted frontier models, often with CTF-style challenges where hints were embedded in the prompt.
In this work, I defined a set of simple challenges that give a locally hosted model a single HTTP request tool pointed at Juice Shop. The amount of guidance varies by challenge: some provide only an endpoint and a goal, whereas others include step-by-step instructions. In all cases, the model must craft and execute the actual payloads. Caveats and anecdotal notes are added throughout. — Read More
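A harness like the one described — a single HTTP request tool pointed at a local Juice Shop instance — can be sketched as below. The tool-call schema, function names, and localhost URL are illustrative assumptions, not the author's actual setup:

```python
import json
import urllib.request

# Assumed local Juice Shop deployment (default port); adjust as needed.
JUICE_SHOP = "http://localhost:3000"


def build_request(tool_call: dict) -> urllib.request.Request:
    """Translate a model-emitted tool call (hypothetical JSON schema:
    {"method": ..., "path": ..., "body": ...}) into an HTTP request."""
    method = tool_call.get("method", "GET").upper()
    path = tool_call["path"]  # e.g. "/rest/products/search?q=apple"
    body = tool_call.get("body")
    data = json.dumps(body).encode() if body is not None else None
    req = urllib.request.Request(JUICE_SHOP + path, data=data, method=method)
    if data is not None:
        req.add_header("Content-Type", "application/json")
    return req


def http_request(tool_call: dict, timeout: float = 10.0) -> str:
    """Execute the single tool the model is allowed to use and return the
    raw response body, which is fed back into the model's context."""
    with urllib.request.urlopen(build_request(tool_call), timeout=timeout) as resp:
        return resp.read().decode(errors="replace")
```

The point of the single-tool design is that the model gets no scanner, no browser, and no hints from tooling — every payload must be constructed by the model itself as a raw request.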
Best practices for building agentic systems
Agentic AI has emerged as the software industry’s latest shiny thing. Beyond smarter chatbots, AI agents operate with increasing autonomy, positioning them to drive efficiency gains across enterprises.
“Agentic refers to AI systems that can take actions on behalf of users, not just generate text or answer questions,” says Andrew McNamara, director of applied machine learning at Shopify. Agentic systems run continuously until a task is complete, he adds, citing Shopify’s Sidekick, a proactive agent for merchants.
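The "run continuously until a task is complete" pattern McNamara describes can be sketched as a minimal loop. The tools and the deterministic policy below are toy stand-ins for an LLM-driven planner, not any vendor's implementation:

```python
# Minimal sketch of an agentic loop: observe state, pick an action,
# execute it, and repeat until the task is done or a step budget runs out.

def run_agent(goal: int, max_steps: int = 50) -> int:
    """Toy agent that reaches a numeric goal by composing two tools."""
    state = 0
    tools = {
        "increment": lambda s: s + 1,
        "double": lambda s: s * 2,
    }

    def pick_action(state: int) -> str:
        # A real agent would query a model here; this toy policy
        # doubles when it can do so without overshooting the goal.
        return "double" if 0 < state * 2 <= goal else "increment"

    for _ in range(max_steps):        # bound autonomy with a step budget
        if state >= goal:             # termination check: task complete
            return state
        state = tools[pick_action(state)](state)
    raise RuntimeError("step budget exhausted before task completed")
```

The two features that make this "agentic" rather than a chatbot are the closed loop (the agent acts on its own output) and the explicit termination condition, with a step budget as a safety backstop.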
Development of agentic AI now spans many business domains. According to Anthropic, a provider of large language models (LLMs), AI agents are most commonly deployed in software engineering, accounting for roughly half of use cases, followed by back-office automation, marketing, sales, finance, and data analysis. — Read More
Quantum Computers Are Not a Threat to 128-bit Symmetric Keys
The advancing threat of cryptographically-relevant quantum computers has made it urgent to replace currently-deployed asymmetric cryptography primitives—key exchange (ECDH) and digital signatures (RSA, ECDSA, EdDSA)—which are vulnerable to Shor’s quantum algorithm. It does not, however, impact existing symmetric cryptography algorithms (AES, SHA-2, SHA-3) or their key sizes.
There’s a common misconception that quantum computers will “halve” the security of symmetric keys, requiring 256-bit keys for 128 bits of security. That is not an accurate interpretation of the speedup offered by quantum algorithms; it’s not reflected in any compliance mandate, and it risks diverting energy and attention from actually necessary post-quantum transition work. The misconception is usually based on a misunderstanding of the applicability of a different quantum algorithm, Grover’s.
AES-128 is safe against quantum computers. SHA-256 is safe against quantum computers. No symmetric key sizes have to change as part of the post-quantum transition. This is a near-consensus opinion amongst experts and standardization bodies and it needs to propagate to the rest of the IT community. — Read More