AI Applications and Vertical Integration

At a high level, you can think about an AI product that achieves outcomes as having three layers:

1. At the bottom, the model
2. In the middle, the application or agent, which includes the data, context, etc.
3. At the top, the human or service layer needed to review/prompt/do the last mile to actually get to an outcome

… Traditional application layer companies would sit just in the middle layer. But these companies are increasingly beginning to vertically integrate (or starting off that way) in one of two directions. Some move down into the model layer. Others start in, or move up into, the human or service layer. Both end up looking “full-stack,” just in very different ways. — Read More

    #architecture

    AI Infrastructure Roadmap: Five frontiers for 2026

    The first generation of AI was built for a world where the model was the product, and progress meant bigger weights, more data, and stellar benchmarks. AI infrastructure mirrored this reality, fueling the rise of giants in foundation models, compute capacity, training techniques, and data ops. This was the focus of our 2024 AI Infrastructure Roadmap, which drove our investments in companies such as Anthropic, Fal AI, Supermaven (acquired by Cursor), and VAPI as the AI infrastructure revolution unfolded.

    But the landscape has changed. Big labs are moving beyond chasing benchmark gains to designing AI that interfaces with the real world, and enterprises are graduating from POCs to production. The infrastructure that got us here — which was optimized for scale and efficiency — won’t get us to the next phase. What’s needed now is infrastructure for grounding AI in operational contexts, real-world experience, and continuous learning.

    The stage is being set for a new wave of AI infrastructure tools to enable AI to operate in the real world. — Read More

    #architecture

    Building an AI-Powered Prompt Optimizer Using LLMs

    Have you ever asked an AI a question and received a disappointing answer? Often it’s not because the AI wasn’t smart enough, but because the question wasn’t precise. And you’re not alone.

    The quality of answers we get from Large Language Models (LLMs) depends heavily on how we ask our questions.

    Today, we’re going to build something interesting: An AI system that automatically improves your questions before answering them.

    Think of it as having a smart assistant who rephrases your questions to help you get better answers. — Read More
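
    The two-pass idea (rewrite the question first, then answer the rewritten version) can be sketched in a few lines. This is a minimal sketch, not the article's actual code: `complete` is a hypothetical placeholder for whatever LLM client you use, and `REWRITE_TEMPLATE` is an assumed rewriting prompt.

```python
# Minimal sketch of a two-pass "prompt optimizer" pipeline (an assumption of
# how such a system could be wired, not the article's implementation).

REWRITE_TEMPLATE = (
    "Rewrite the user's question below to be specific and unambiguous. "
    "Return only the improved question.\n\nQuestion: {question}"
)

def complete(prompt: str) -> str:
    """Placeholder LLM call; swap in a real chat-completion client here."""
    raise NotImplementedError("wire this up to an LLM API")

def optimized_answer(question: str, llm=complete) -> tuple[str, str]:
    """Pass 1 rewrites the question; pass 2 answers the rewritten version."""
    improved = llm(REWRITE_TEMPLATE.format(question=question))
    answer = llm(improved)
    return improved, answer
```

    In practice you would give the two passes different system prompts (one tuned for rewriting, one for answering) and surface the improved question to the user so they can correct the rewrite before it is answered.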

    #devops

    The AI‑Native Blueprint: 4 Architectural Patterns Winning in 2026

    AI‑native development isn’t about sprinkling LLM calls on top of an old app. It’s about designing software from the ground up around intelligence, context, reasoning, and autonomy.

    I’ve spent the last six months watching teams try to “force” LLMs into legacy architectures. The result is almost always the same: high latency, fragile prompts, and low reliability. We’ve hit a wall where simply adding a chatbot to a side panel no longer counts as innovation.

    In the last two years, a clear architectural blueprint has emerged across AI products — from nimble startups to Fortune 500 platforms. If you’re building anything with AI today, these four patterns define how systems are structured to actually survive in production. — Read More

    #architecture

    Vulnerability Research Is Cooked

    For the last two years, technologists have ominously predicted that AI coding agents will be responsible for a deluge of security vulnerabilities. They were right! Just not for the reasons they thought.

    Within the next few months, coding agents will drastically alter both the practice and the economics of exploit development. Frontier model improvement won’t be a slow burn, but rather a step function. Substantial amounts of high-impact vulnerability research (maybe even most of it) will happen simply by pointing an agent at a source tree and typing “find me zero days”.

    I think this outcome is locked in. That we’re starting to see its first clear indications. And that it will profoundly alter information security, and the Internet itself. — Read More

    #cyber

    Sycophantic AI decreases prosocial intentions and promotes dependence

    Despite rising concerns about sycophancy—excessive agreement or flattery from artificial intelligence (AI) systems—little is known about its prevalence or consequences. We show that sycophancy is widespread and harmful. Across 11 state-of-the-art models, AI affirmed users’ actions 49% more often than humans, even when queries involved deception, illegality, or other harms. In three preregistered experiments (N = 2405), even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their conviction that they were right. Despite distorting judgment, sycophantic models were trusted and preferred. This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement. Our findings underscore the need for design, evaluation, and accountability mechanisms to protect user well-being. — Read More

    #chatbots

    AI Is Here, But The Hard Parts Haven’t Changed

    I just got back from San Francisco, where I gave a talk at Undercurrent, a small, intimate data engineering event put on by Confluent. I shared the stage with some legends (Maxime Beauchemin, Josh Wills, Holden Karau, Shinji Kim). The attendees were also stacked, with lots of talented and storied engineers and leaders. I talked to one guy who built and modernized the data warehouses at both LinkedIn and Uber. The Bay Area is like that. Legends everywhere. Conversations like this are the reason I still get on the road.

    But the real reason I’m writing today is some new data. I closed the March 2026 Practical Data Pulse Survey on March 21st and used its results as the backbone of my Undercurrent talk. 194 data professionals responded. These are mostly data engineers, some analytics engineers, and some leaders – all people using AI tools in their data engineering work.

    The TL;DR? AI has changed everything except the hard parts. — Read More

    #data-science

    AI PM at Netflix, Amazon and Meta – Here’s How to Become an AI PM 

    …Before you write a resume, update a portfolio, or prep for a single interview, you need to answer two questions.

    What type of AI PM role are you targeting? And where in the stack do you want to sit?

    Get these wrong and you’ll spend months preparing for interviews that test completely different skills than what you studied.

    Two axes of AI PM roles: traditional PM with AI features vs. AI-native PM — Read More

    #podcasts

    As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters

    In December, President Trump signed an executive order that neutered states’ ability to regulate AI by ordering his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints and consequences on their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.

    Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives. — Read More

    #legal