Attacks against modern generative AI large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques for embedding malicious instructions into an LLM’s inputs. That term suggests a simple, singular vulnerability, and the framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we propose a structured seven-step “promptware kill chain” to give policymakers and security practitioners the vocabulary and framework needed to address the escalating AI threat landscape. — Read More
Author Archives: Rick's Cafe AI
Why I don’t think AGI is imminent
The CEOs of OpenAI and Anthropic have both claimed that human-level AI is just around the corner — and at times, that it’s already here. These claims have generated enormous public attention. There has been some technical scrutiny of them, but critiques rarely reach the public discourse. This piece is a sketch of my own thinking about the boundary between transformer-based large language models and human-level cognition. I have an MS in Machine Learning from over a decade ago, and while I don’t currently work in the field of AI, I am well-read on the underlying research. If you know more than I do about these topics, please reach out and let me know; I would love to develop my thinking on this further. — Read More
#strategy The “AI Kills SaaS” Take Is Lazy. Here’s What’s Actually Happening.
HubSpot’s revenue is up 19%.
Xero is up 23%.
Atlassian is up 23%.
Figma is growing at 40%.
Adobe added another 11% to hit $23.8 billion.
And every single one of their stock prices has been absolutely destroyed this year.
Here is HubSpot, currently at $228, down from a 52-week high of $881.
…So what’s going on? The popular take is simple: AI has arrived, SaaS is dead, pack it up. … I wanted to go deeper. — Read More
Google identifies state-sponsored hackers using AI in attacks
State-sponsored hackers are exploiting highly advanced tooling to accelerate their particular flavours of cyberattack, with threat actors from Iran, North Korea, China, and Russia using models like Google’s Gemini to further their campaigns. They are able to craft sophisticated phishing campaigns and develop malware, according to a new report from Google’s Threat Intelligence Group (GTIG).
The quarterly AI Threat Tracker report, released today, reveals how government-backed attackers have begun to use artificial intelligence across the attack lifecycle – reconnaissance, social engineering, and eventually, malware development – based on GTIG’s observations during the final quarter of 2025.
“For government-backed threat actors, large language models have become essential tools for technical research, targeting, and the rapid generation of nuanced phishing lures,” GTIG researchers stated in their report. — Read More
Optimal Timing for Superintelligence
Developing superintelligence is not like playing Russian roulette; it is more like undergoing risky surgery for a condition that will otherwise prove fatal. We examine optimal timing from a person-affecting stance (and set aside simulation hypotheses and other arcane considerations). Models incorporating safety progress, temporal discounting, quality-of-life differentials, and concave QALY utilities suggest that even high catastrophe probabilities are often worth accepting. Prioritarian weighting further shortens timelines. For many parameter settings, the optimal strategy would involve moving quickly to AGI capability, then pausing briefly before full deployment: swift to harbor, slow to berth. But poorly implemented pauses could do more harm than good. — Read More
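The trade-off this excerpt describes can be sketched as a toy expected-utility calculation (a minimal illustration only, not the paper’s actual model; the quality levels, horizons, discount rate, and concavity exponent below are all assumptions chosen for the sketch):

```python
def discounted_qalys(quality, years, discount=0.03):
    """Sum of annual QALYs under a constant discount rate (assumed 3%)."""
    return sum(quality / (1 + discount) ** t for t in range(years))

def expected_utility(p_catastrophe, qalys_success, qalys_failure, concavity=0.5):
    """Expected concave utility over the catastrophe lottery, with u(x) = x**concavity."""
    u = lambda x: x ** concavity
    return (1 - p_catastrophe) * u(qalys_success) + p_catastrophe * u(qalys_failure)

# Status quo: moderate quality of life for a normal remaining lifespan.
baseline = discounted_qalys(quality=0.8, years=40)

# Deploying superintelligence: large quality and longevity gains if it goes
# well; near-total loss if it goes catastrophically wrong.
success = discounted_qalys(quality=1.0, years=80)
failure = discounted_qalys(quality=0.1, years=1)

for p in (0.1, 0.3, 0.5):
    print(f"p(catastrophe)={p:.1f}: deploy={expected_utility(p, success, failure):.2f}, "
          f"status quo={baseline ** 0.5:.2f}")
```

With these particular parameters, deploying beats the status quo at a 10% catastrophe probability but not at 50%; the crossover point is highly sensitive to exactly the parameters the paper varies (discounting, quality differentials, utility concavity).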
#singularity The Concept Every AI Coder Learns Too Late
Have you ever spent hours debugging code that Claude had written 30 minutes before?
Exact same model, same chat, and same prompting. For some reason, Claude starts ignoring previous decisions you made together or ignores mentioned markdown files, only to then present blatantly incorrect suggestions.
You aren’t at fault here. Instead, you’re experiencing context rot. — Read More.
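Context rot names what happens when a long chat’s accumulated history crowds out the instructions that actually matter. One common mitigation is to prune older turns while pinning the system message. A minimal sketch (the 4-characters-per-token estimate and the message format are assumptions for illustration, not any particular API):

```python
def trim_context(messages, max_tokens=2000, pinned_roles=("system",)):
    """Naive context-window manager: keep pinned messages plus the most
    recent turns that fit a rough token budget (assumption: ~1 token per
    4 characters; real tokenizers differ)."""
    est = lambda m: max(1, len(m["content"]) // 4)
    pinned = [m for m in messages if m["role"] in pinned_roles]
    budget = max_tokens - sum(est(m) for m in pinned)
    kept = []
    # Walk from the newest turn backwards, keeping turns until the budget runs out.
    for m in reversed([m for m in messages if m["role"] not in pinned_roles]):
        cost = est(m)
        if cost > budget:
            break
        kept.append(m)
        budget -= cost
    return pinned + list(reversed(kept))
```

Real systems often summarize the dropped turns rather than discard them outright, but even this crude recency window keeps the system prompt and recent decisions from being diluted by stale history.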
Something Big Is Happening
Think back to February 2020.
If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren’t paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they’d been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn’t have believed if you’d described it to yourself a month earlier.
I think we’re in the “this seems overblown” phase of something much, much bigger than Covid.
I’ve spent six years building an AI startup and investing in the space. I live in this world. And I’m writing this for the people in my life who don’t… my family, my friends, the people I care about who keep asking me “so what’s the deal with AI?” and getting an answer that doesn’t do justice to what’s actually happening. I keep giving them the polite version. The cocktail-party version. Because the honest version sounds like I’ve lost my mind. And for a while, I told myself that was a good enough reason to keep what’s truly happening to myself. But the gap between what I’ve been saying and what is actually happening has gotten far too big. The people I care about deserve to hear what is coming, even if it sounds crazy.
… Most of us who work in AI are building on top of foundations we didn’t lay. We’re watching this unfold the same as you… we just happen to be close enough to feel the ground shake first.
But it’s time now. Not in an “eventually we should talk about this” way. In a “this is happening right now and I need you to understand it” way. – Read More
How Top 0.1% ChatGPT Users Actually Write Prompts (And How You Can Too)
Most people think good prompting is just about providing context, but that’s only part of it.
Top 0.1% ChatGPT users do something very different.
They don’t just ask questions to ChatGPT; they also control how the model thinks.
They design the thinking process for the model; as a result, the desired output comes naturally. — Read More
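As a concrete illustration of “designing the thinking process,” here is a hypothetical prompt template that prescribes reasoning steps before the answer (the function name and step wording are invented for this sketch, not taken from the article):

```python
def process_prompt(question, steps):
    """Build a prompt that tells the model how to reason, not just what to answer."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Question: {question}\n\n"
        "Before answering, work through these steps in order:\n"
        f"{numbered}\n\n"
        "Then give the final answer with your key assumptions listed."
    )

prompt = process_prompt(
    "Should we migrate our billing service to event sourcing?",
    [
        "List the current pain points the migration must solve.",
        "Enumerate the risks and operational costs of event sourcing.",
        "Compare against two cheaper alternatives.",
        "Recommend the migration only if the pain points outweigh the risks.",
    ],
)
print(prompt)
```

The point of the template is that the reasoning structure, not just the question, is under the user’s control: changing the steps changes how the model approaches the problem.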
The Top Open-Source LLMs in 2026
For years, the narrative around large language models was simple: the most capable models lived behind APIs, and open-source alternatives trailed behind by a generation or two. Open models were good for experimentation, research, or cost-sensitive use cases — but not for serious, production-grade intelligence.
That narrative has collapsed. — Read More