Academia and the “AI Brain Drain”

In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That figure is expected to surge to $650 billion this year, funding physical infrastructure such as data centers (see go.nature.com/3lzf79q). Moreover, these firms are spending lavishly on one particular segment: top technical talent.

Meta reportedly offered a single AI researcher, who had cofounded a start-up firm focused on training AI agents to use computers, a compensation package of $250 million over four years (see go.nature.com/4qznsq1). Technology firms are also spending billions on “reverse-acquihires”—poaching the star staff members of start-ups without acquiring the companies themselves. Eyeing these generous payouts, technical experts earning more modest salaries might well reconsider their career choices.

Academia is already losing out. — Read More

#strategy

He Wrote 200 Lines of Code and Walked Away (What Happened Next Will Blow Your Mind)

Let me tell you a story that’s going to mess with your head a little bit.

A developer named Liyuanhao sat down and wrote 200 lines of code in Rust.

That’s it. Just a tiny, bare-bones script.

But what happened after he hit run is the kind of thing you have to read twice just to make sure you aren’t imagining things.

He named the project yoyo — a self-evolving coding agent. And then, and this is the part that genuinely gets me, he stepped away entirely. He took his hands off the keyboard.

He gave it one single instruction: evolve until you rival Claude Code. Then, he just sat back and watched. — Read More
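The article does not reproduce Liyuanhao's actual Rust code, but the loop it describes (propose a change to your own source, measure it, keep it only if it improves) can be sketched generically. In this hypothetical Python sketch, `propose_patch` and `run_benchmark` are toy stand-ins for the real LLM call and the real coding-benchmark harness:

```python
import random

def run_benchmark(program: str) -> float:
    """Toy stand-in for a fitness function; a real system would
    run a coding-benchmark suite against the candidate agent."""
    return (sum(ord(c) for c in program) % 100) / 100

def propose_patch(program: str) -> str:
    """Toy stand-in for an LLM call that rewrites the agent's own source."""
    return program + f"\n# tweak {random.randint(0, 999)}"

def evolve(seed: str, generations: int = 20) -> tuple[str, float]:
    """Greedy self-improvement loop: keep a candidate only if it scores higher."""
    best, best_score = seed, run_benchmark(seed)
    for _ in range(generations):
        candidate = propose_patch(best)
        score = run_benchmark(candidate)
        if score > best_score:  # discard regressions, keep improvements
            best, best_score = candidate, score
    return best, best_score

program, score = evolve("# agent v0")
print(f"best score: {score:.2f}")
```

The whole trick is that the human only supplies the seed and the stopping criterion; everything between is the loop running unattended.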

#devops

Institutional AI vs Individual AI

AI just made every individual 10x more productive.

No company became 10x more valuable as a result.

Where did the productivity go?

This isn’t the first time this has happened.

In the 1890s, electricity promised enormous productivity gains.

Textile mills in New England, built to harness the rotational power of steam engines, quickly installed faster electric motors in their place.

But for thirty years, electrified mills saw almost no increase in output. The technology was far superior. But the organization was not.

It wasn’t until the 1920s, when mills were redesigned from the ground up, with assembly lines, individual motors on every piece of equipment, and workers and machines doing drastically different jobs, that electrification produced meaningful returns. — Read More

#strategy

How I Use LLMs for Security Work

I’ve been using LLM tools like Claude, Cursor, and ChatGPT extensively in my security and engineering work for the past couple of years. Not as a replacement for thinking, but because they genuinely help me move faster through complex problems. If you’re a security analyst, SOC analyst, threat hunter, or engineer who hasn’t found a rhythm with these tools yet, I’ll share what’s been working for me in the hope it helps you too. — Read More

#cyber

How We Hacked McKinsey’s AI Platform

McKinsey & Company — the world’s most prestigious consulting firm — built an internal AI platform called Lilli for its 43,000+ employees. Lilli is a purpose-built system: chat, document analysis, RAG over decades of proprietary research, AI-powered search across 100,000+ internal documents. Launched in 2023, named after the first professional woman hired by the firm in 1945, adopted by over 70% of McKinsey, processing 500,000+ prompts a month.

So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.

Within 2 hours, the agent had full read and write access to the entire production database. — Read More

#cyber

The most important question nobody’s asking about AI

By now, I’m sure you’ve heard that the Department of War has declared Anthropic a supply chain risk, because Anthropic refused to remove redlines around the use of their models for mass surveillance and for autonomous weapons.

Honestly, I think this situation is a warning shot. Right now, LLMs are probably not being used in mission-critical ways. But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it.

Our future civilization will run on AI labor. And as much as the government’s actions here piss me off, in a way I’m glad this episode happened – because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that. — Read More

#dod

How A Regular Person Can Utilize AI Agents

Let’s do this again, redux! I’ll explain how to use AI agents for easy language learning, to create an easier version of my morning briefing, and finally, a far easier version of my briefing transcription -> summary -> action pipeline. In the process, my goal is to help readers remix the general principles for their own (mostly safe) agents.

My last piece about AI agents was my most popular and widely shared article to date. Usually, one writes a “Part 1” that’s easier and a “Part 2” that’s more complex. This is the exact opposite.

… So, in this revisit, I have these goals:

— Explain the general principles of creating agents (more slowly).
— Use methods that are more accessible to non-technical users.
— Give a framework for remixing these methods for readers’ own ideas/agents.

Ironically, this piece took longer than my last one. Instead of just sharing my workflows, this piece is designed to let you use these agents with step-by-step instructions, from scratch, and have them adapted to you (not me). — Read More
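The transcription -> summary -> action pipeline the author mentions can be sketched as three composable steps. This is a hypothetical outline, not the author's workflow: `transcribe`, `summarize`, and `extract_actions` here are hard-coded stand-ins for what would really be a speech-to-text call and two LLM prompts.

```python
def transcribe(audio_path: str) -> str:
    """Stand-in for a speech-to-text call (e.g. a local transcription model)."""
    return "Discussed the launch. Alice will email the vendor by Friday."

def summarize(transcript: str) -> str:
    """Stand-in for an LLM summarization prompt; here just the first sentence."""
    return transcript.split(".")[0] + "."

def extract_actions(transcript: str) -> list[str]:
    """Stand-in for an LLM prompt that pulls out action items."""
    return [s.strip() + "." for s in transcript.split(".") if " will " in s]

def morning_briefing(audio_path: str) -> dict:
    """Chain the three steps into one pipeline."""
    transcript = transcribe(audio_path)
    return {
        "summary": summarize(transcript),
        "actions": extract_actions(transcript),
    }

print(morning_briefing("standup.m4a"))
```

The design point is that each stage takes plain text in and plain text out, so any single stage can be swapped for a different model or tool without touching the others.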

#devops

The SaaSpocalypse: AI Agents, Vibe Coding, and the Changing Economics of SaaS

Over the past few months, a new phrase has been circulating across tech, venture capital, and public markets:

“The SaaSpocalypse.”

The narrative is straightforward, and a bit alarming for SaaS operators. What’s real and what’s clickbait?

We know this much: AI agents are improving rapidly. Coding tools can generate entire applications. AI can automate workflows once performed inside SaaS products.

If software can now be generated on demand, the logic goes: why pay recurring subscriptions for SaaS at all? — Read More

#strategy

Andrej Karpathy’s new open source ‘autoresearch’ lets you run hundreds of AI experiments a night — with revolutionary implications

Over the weekend, Andrej Karpathy, the influential former Tesla AI lead and OpenAI founding member who coined the term “vibe coding”, posted on X about his new open-source project, autoresearch.

It wasn’t a finished model or a massive corporate product: by his own admission, it was a simple, 630-line script made available on GitHub under a permissive, enterprise-friendly MIT license. But the ambition was massive: automating the scientific method with AI agents while we humans sleep. — Read More
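The 630-line script itself isn't reproduced here, but the "hundreds of experiments a night" idea boils down to an unattended sweep: enumerate configurations, run each one, log the score, and wake up to a ranked list. A minimal generic sketch, with `run_experiment` as a deterministic toy stand-in for real training and evaluation:

```python
import itertools
import json
import random

def run_experiment(config: dict) -> float:
    """Toy stand-in for training and evaluating one configuration.
    Seeding from the config makes the fake score reproducible."""
    random.seed(json.dumps(config, sort_keys=True))
    return random.random()  # pretend validation score in [0, 1)

# Hypothetical hyperparameter grid; a real sweep could also be
# LLM-proposed configurations rather than a fixed grid.
grid = {
    "lr": [1e-3, 3e-4],
    "batch_size": [32, 64],
}

results = []
for values in itertools.product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    results.append((run_experiment(config), config))

best_score, best_config = max(results, key=lambda r: r[0])
print(best_config, f"score={best_score:.3f}")
```

Leaving a loop like this running overnight is the unglamorous core of the pitch; the novelty is having agents propose and interpret the experiments, not just execute them.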

#devops

The Anthropic Shockwave: Why Claude Code Security Just Nuked Cybersecurity Stocks

The Dirty Secret of the SOC

Here is the nuclear option nobody in Silicon Valley wanted to talk about. For years, the cybersecurity industry has been a high-stakes gambling ring built on a house of cards. You pay millions for “endpoint protection” and “zero trust” wrappers that essentially act as expensive digital duct tape. But what happens when the tape is no longer needed because the hole in the wall simply ceases to exist?

Anthropic just pressed the button.

On February 20, 2026, the AI industry stopped playing nice. With the launch of Claude Code Security, Anthropic didn’t just release another “assistant.” They released a predator. This isn’t the usual incremental update. This is a paradigm shift where the LLM moves from “writing buggy code” to “fixing bugs that have existed since the Clinton administration.” — Read More

#cyber