In December 2023, we first covered The Four Wars of AI and the RAG/Ops War. After tens of millions of dollars poured into vector databases, and ups and downs in the hype cycle, we finally have Jeff Huber from Chroma joining us today for the new hot take: “RAG” is dead…
and as context lengths increase and more and more AI workloads shift from simple chatbots to impactful agents, new work from thought leaders like Lance Martin and Dex Horthy is making genuine contributions of substance to the previously underrated context box. — Read More
Do What? Teaching Vision-Language-Action Models to Reject the Impossible
Recently, Vision-Language-Action (VLA) models have demonstrated strong performance on a range of robotic tasks. These models rely on multimodal inputs, with language instructions playing a crucial role — not only in predicting actions, but also in robustly interpreting user intent, even when the requests are impossible to fulfill. In this work, we investigate how VLAs can recognize, interpret, and respond to false-premise instructions: natural language commands that reference objects or conditions absent from the environment. We propose Instruct-Verify-and-Act (IVA), a unified framework that (i) detects when an instruction cannot be executed due to a false premise, (ii) engages in language-based clarification or correction, and (iii) grounds plausible alternatives in perception and action. To this end, we construct a large-scale instruction tuning setup with structured language prompts and train a VLA model capable of handling both accurate and erroneous requests. Our approach leverages a contextually augmented, semi-synthetic dataset containing paired positive and false-premise instructions, enabling robust detection and natural language correction. Our experiments show that IVA improves false-premise detection accuracy by 97.56% over baselines, while increasing successful responses in false-premise scenarios by 50.78%. — Read More
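The three-stage control flow the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the naive set-membership premise check stands in for the VLA model's learned detection, and every function name here is a hypothetical.

```python
# Illustrative sketch of an Instruct-Verify-and-Act style control flow.
# A keyword match against scene objects stands in for the learned
# false-premise detector; all names are hypothetical.

def extract_referenced_objects(instruction: str, vocabulary: set[str]) -> set[str]:
    """Return objects from a known vocabulary mentioned in the instruction."""
    words = set(instruction.lower().replace(",", " ").split())
    return vocabulary & words

def instruct_verify_and_act(instruction: str, scene_objects: set[str],
                            vocabulary: set[str]) -> str:
    referenced = extract_referenced_objects(instruction, vocabulary)
    missing = referenced - scene_objects
    if missing:
        # (i) false premise detected; (ii) respond with a language-based
        # correction; (iii) ground a plausible alternative in the scene.
        alternative = sorted(scene_objects)[0] if scene_objects else None
        reply = f"I don't see {', '.join(sorted(missing))} here."
        if alternative:
            reply += f" Would you like me to use the {alternative} instead?"
        return reply
    return f"Executing: {instruction}"
```

A real system would replace the keyword check with the model's own perception-grounded judgment; the point is only that detection, correction, and grounded fallback form one loop rather than three separate components.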
AGI is an Engineering Problem
We’ve reached an inflection point in AI development. The scaling laws that once promised ever-more-capable models are showing diminishing returns. GPT-5, Claude, and Gemini represent remarkable achievements, but they’re hitting asymptotes that brute-force scaling can’t solve. The path to artificial general intelligence isn’t through training ever-larger language models—it’s through building engineered systems that combine models, memory, context, and deterministic workflows into something greater than their parts.
Let me be blunt: AGI is an engineering problem, not a model training problem. — Read More
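The composition the author argues for, a model call embedded in memory and deterministic workflow steps, could look something like this sketch. It is an assumption about what such a system might contain, not the author's design: `call_model` is a hypothetical stand-in for any LLM API, and the pipeline shape is illustrative.

```python
# Rough sketch of an "engineered system": a stochastic model call wrapped
# in deterministic structure (memory retrieval, context assembly, output
# validation). call_model is a placeholder, not a real API.

from dataclasses import dataclass, field

@dataclass
class Memory:
    """A trivial key-value store standing in for long-term memory."""
    facts: dict[str, str] = field(default_factory=dict)

    def recall(self, topic: str) -> str:
        return self.facts.get(topic, "")

def call_model(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return f"[model response to: {prompt}]"

def validate(output: str) -> bool:
    # Deterministic check the model's output must pass before use.
    return output.startswith("[model response")

def run_pipeline(task: str, memory: Memory) -> str:
    context = memory.recall(task)                # deterministic retrieval
    prompt = f"Task: {task}\nKnown: {context}"   # explicit context assembly
    output = call_model(prompt)                  # the only stochastic step
    if not validate(output):                     # deterministic guardrail
        raise ValueError("model output failed validation")
    memory.facts[task] = output                  # write result back to memory
    return output
```

The design choice being illustrated is that everything except the model call is ordinary, testable code, which is the sense in which the problem becomes engineering rather than training.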
Not Everything Is an LLM: 8 AI Model Types You Need to Know in 2025
In 2023, if you said “AI”, most people thought of ChatGPT.
Fast-forward to 2025, and the landscape looks very different. LLMs (Large Language Models) may have ignited the AI revolution, but now we’re deep into an era of specialized AI models, each designed with a specific superpower.
Yet, somehow, everyone still calls them LLMs.
It’s like calling every vehicle a “car”, whether it’s a bicycle, a truck, or a plane. Sure, they all move, but they’re built for very different purposes. — Read More