New physical attacks are quickly diluting secure enclave defenses from Nvidia, AMD, and Intel

Trusted execution environments, or TEEs, are everywhere—in blockchain architectures, virtually every cloud service, and computing involving AI, finance, and defense contractors. It’s hard to overstate the reliance that entire industries have on three TEEs in particular: Confidential Compute from Nvidia, SEV-SNP from AMD, and SGX and TDX from Intel. All three come with assurances that confidential data and sensitive computing can’t be viewed or altered, even if a server has suffered a complete compromise of the operating system kernel.

A trio of novel physical attacks raises new questions about the true security these TEEs offer, and about the exaggerated promises and misconceptions coming from the players, big and small, that use them.

The most recent attack, released Tuesday, is known as TEE.fail. It defeats the latest TEE protections from all three chipmakers. The low-cost, low-complexity attack works by placing a small piece of hardware between a single physical memory chip and the motherboard slot it plugs into. It also requires the attacker to compromise the operating system kernel. Once this three-minute attack is completed, Confidential Compute, SEV-SNP, and SGX/TDX can no longer be trusted. Unlike the Battering RAM and Wiretap attacks from last month—which worked only against CPUs using DDR4 memory—TEE.fail works against DDR5, allowing it to defeat the latest TEEs. — Read More
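Why does snooping the memory bus break these guarantees at all? A key reason, per the TEE.fail researchers, is that the memory encryption these TEEs use is deterministic: the same plaintext written to the same address always produces the same ciphertext, with no per-write freshness. The toy sketch below illustrates the principle only; it uses a keyed hash as a stand-in for the real AES-XTS cipher (the hash is not decryptable, but it is deterministic and address-tweaked in the same way), so an eavesdropper who never learns the key can still detect when a known or repeated value hits the bus.

```python
import hashlib

def toy_encrypt(key: bytes, addr: int, block: bytes) -> bytes:
    # Toy stand-in for deterministic, address-tweaked memory encryption.
    # Real TEEs use AES-XTS; this keyed hash is NOT a cipher, but it shares
    # the relevant property: same (key, addr, plaintext) -> same ciphertext,
    # with no nonce or counter providing per-write freshness.
    return hashlib.sha256(key + addr.to_bytes(8, "little") + block).digest()

key = b"secret-memory-encryption-key"

# An interposer on the DDR5 bus records ciphertexts as they flow to DRAM...
observed = toy_encrypt(key, addr=0x1000, block=b"ACCESS_GRANTED!!")

# ...and later sees the identical ciphertext at the same address. Without
# freshness, equal ciphertexts imply equal plaintexts, leaking information
# even though the attacker never recovers the key.
later = toy_encrypt(key, addr=0x1000, block=b"ACCESS_GRANTED!!")
print(observed == later)      # True: the repeat is visible on the bus

# The address acts as a tweak, so the same data elsewhere looks different:
elsewhere = toy_encrypt(key, addr=0x2000, block=b"ACCESS_GRANTED!!")
print(observed == elsewhere)  # False
```

The same determinism is what lets a physical interposer replay previously captured ciphertexts back into memory, which is why a kernel-level attacker with this hardware can undermine the TEE's attestation and confidentiality claims.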

#strategy

#cyber

Through the Looking Glass: Stephen Klein’s Quest to Make AI Think Before It Speaks

“Agentic AI is 100% Non-Sense Designed To Scare You Into Spending Money on Consulting.”

That was the hook of a LinkedIn post designed to ruffle feathers in the AI world. It was bold, direct, and very Stephen Klein.

… While most of Silicon Valley is busy building AI companies designed to automate and replace jobs, all in the pursuit of profit, Stephen is purposely, loudly, going against the grain.

He is the founder of Curiouser.ai, a startup building the world’s first strategic AI coach, Alice, designed not to answer your questions, but to ask them.

And not just any questions: thought-provoking, Socratic, destabilizing questions.

Welcome to Alice in Wonderland. And Stephen, like a modern-day Lewis Carroll, is inviting us to question everything. — Read More

#strategy

Systems Thinking for Scaling Responsible Multi-Agent Architectures

Nimisha Asthagiri explains the critical need for responsible AI in complex multi-agent systems. She shares practical techniques for engineering leaders and architects, applying systems thinking and Causal Flow Diagrams. She shows how these methods help predict and mitigate the unintended consequences and structural risks inherent in autonomous, learning agents, using a scheduler agent example. — Read More

#strategy

Technological Optimism and Appropriate Fear

I remember being a child and after the lights turned out I would look around my bedroom and I would see shapes in the darkness and I would become afraid – afraid these shapes were creatures I did not understand that wanted to do me harm. And so I’d turn my light on. And when I turned the light on I would be relieved because the creatures turned out to be a pile of clothes on a chair, or a bookshelf, or a lampshade.

Now, in the year of 2025, we are the child from that story and the room is our planet. But when we turn the light on we find ourselves gazing upon true creatures, in the form of the powerful and somewhat unpredictable AI systems of today and those that are to come. And there are many people who desperately want to believe that these creatures are nothing but a pile of clothes on a chair, or a bookshelf, or a lampshade. And they want to get us to turn the light off and go back to sleep.

In fact, some people are even spending tremendous amounts of money to convince you of this – that’s not an artificial intelligence about to go into a hard takeoff, it’s just a tool that will be put to work in our economy. It’s just a machine, and machines are things we master.

But make no mistake: what we are dealing with is a real and mysterious creature, not a simple and predictable machine. — Read More

#strategy

MIT report: 95% of generative AI pilots at companies are failing

The GenAI Divide: State of AI in Business 2025, a new report published by MIT’s NANDA initiative, reveals that while generative AI holds promise for enterprises, most initiatives to drive rapid revenue growth are falling flat.

Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects. — Read More

#strategy

Antifraud Company raises $5M for its AI whistleblower platform

The Antifraud Company is betting that AI and private-sector incentives can succeed where government has failed in clawing back billions in fraud.

A new startup with a bold pitch just came out of stealth with over $5 million in funding. The Antifraud Company is building what its founders call a “private-sector DOGE,” a reference to the government’s own short-lived Department of Government Efficiency. The goal is to use AI and investigative journalism to hunt down a slice of the estimated $500 billion lost to US government fraud each year.

The company, which has raised capital from Abstract Ventures, Browder Capital, and Dune Ventures, isn’t selling SaaS. Its business model is pure bounty hunting. The Antifraud Company finds fraud, reports it through official government whistleblower programs, and takes a cut—typically 10 to 30 percent—of whatever the government recovers. It’s a high-stakes, long-game approach, as payouts can take years to materialize. — Read More

#legal, #strategy

Welcome to STATE OF AI REPORT 2025

The State of AI Report is the most widely read and trusted analysis of key developments in AI. Published annually since 2018, the open-access report aims to spark informed conversation about the state of AI and what it means for the future. Produced by AI investor Nathan Benaich and Air Street Capital.

If 2024 was the year of consolidation, 2025 was the year reasoning got real. What began as a handful of “thinking” models has turned into a global competition to make machines that can plan, verify, and reflect. OpenAI, Google, Anthropic, and DeepSeek all released systems capable of reasoning through complex tasks, sparking one of the fastest research cycles the field has ever seen.

AI [now] acts as a force multiplier for technological progress in our increasingly digital, data-driven world. This is because everything around us, from culture to consumer products, is ultimately a product of intelligence. — Read More

#strategy

OpenAI’s Windows Play

… OpenAI is making a play to be the Windows of AI.

For nearly two decades smartphones, and in particular iOS, have been the touchstones for discussions of platforms. It’s important to note, however, that while Apple’s strategy of integrating hardware and software was immensely profitable, it entailed leaving the door open for a competing platform to emerge. The challenge of being a hardware company is that by virtue of needing to actually create devices you can’t serve everyone; Apple in particular didn’t have the capacity or desire to go downmarket, which created the opportunity for Android to not only establish a competing platform but to actually significantly exceed iOS in market share.

That means that if we want a historical analogy for total platform dominance — which increasingly appears to be OpenAI’s goal — we have to go back further to the PC era and Windows. — Read More

#strategy

YouTube Thinks AI Is Its Next Big Bang

Google figured out early on that video would be a great addition to its search business, so in 2005 it launched Google Video. Focused on making deals with the entertainment industry for second-rate content, and overly cautious on what users could upload, it flopped. Meanwhile, a tiny startup run by a handful of employees working above a San Mateo, California, pizzeria was exploding, simply by letting anyone upload their goofy videos and not worrying too much about who held copyrights to the clips. In 2006, Google snapped up that year-old company, figuring it would sort out the IP stuff later. (It did.) Though the $1.65 billion purchase price for YouTube was about a billion dollars more than its valuation, it was one of the greatest bargains ever. YouTube is now arguably the most successful video property in the world. It’s an industry leader in music and podcasting, and more than half of its viewing time is now on living room screens. It has paid out over $100 billion to creators since 2021. One estimate from MoffettNathanson analysts cited by Variety is that if it were a separate company, it might be worth $550 billion.

Now the service is taking what might be its biggest leap yet, embracing a new paradigm that could change its essence. I’m talking, of course, about AI. Since YouTube is still a wholly owned subsidiary of AI-obsessed Google, it’s not surprising that its anniversary product announcements this week touted AI features that will let creators use AI to enhance or produce videos. After all, Google DeepMind’s Veo 3 technology was YouTube’s for the taking. Ready or not, the video camera ultimately will be replaced by the prompt. This means a rethinking of YouTube’s superpower: authenticity.

#strategy

Becoming a Research Engineer at a Big LLM Lab — 18 Months of Strategic Job Hunting

A couple of days ago, I signed as a research engineer with Mistral, one of the few ML foundation-model labs with more than a billion dollars in funding.

My excitement on Twitter resonated widely — partly in the form of requests for advice. Getting here was not an accident. I have strategically worked towards this outcome for an extended period, and I have a few things to share about what worked for me. In a sense, this blog post is a sequel to How to become an ML Engineer in 5 to 7 steps, where I covered my self-taught path toward becoming a machine learning engineer from a non-CS (though STEM) background. Here, I outline how I worked towards what I hope will be a career-defining role. I started this work after about a year in my first ML position.

This is an account of my personal experiences, based on advice I got from friends and found online. I don’t claim it’s original, and my sample is n=1, so cherry-pick what resonates with you. I still hope some find it useful. — Read More

#strategy