The Architecture Of Local-First Web Development

Last October, I was sitting in a hotel room in Lisbon, the night before I was supposed to demo a project management tool my team had spent four months building. The hotel Wi-Fi was doing that thing where it connects but nothing actually loads. And I watched our app, this thing I was genuinely proud of, render a blank screen with a spinner. Then a timeout error. Then nothing.

I pulled out my phone, tethered to cellular, and got a shaky connection. The app loaded, but every click was a two-second wait. Create a task? Spinner. Move a task between columns? Spinner. I sat there thinking: we built a front end in React, a back end in Node, a Postgres database, a Redis cache, a GraphQL API with six resolvers just for the task board. All that infrastructure, and the damn thing can’t show me my own data without a round-trip to a server 3,000 miles away.

That was the night I started seriously looking at local-first architecture. Not because I read a blog post or saw a tweet. Because I was embarrassed. — Read More

#architecture

Notes from inside China’s AI labs

The Chinese companies building language models are set up as the perfect fast-followers for the technology, building on long-standing cultural traditions in education and work, along with subtly different approaches to building technology companies. When you look at the outputs (the latest, biggest models enabling agentic workflows) and the ingredients (excellent scientists, large-scale data, and accelerated computing), the Chinese and American labs look largely similar. The lasting differences emerge in how these are organized and conditioned.

I've long thought that one reason the Chinese labs are so good at catching up and keeping up with the frontier is that they're culturally aligned for the task, but without talking to people directly, I felt it wasn't my place to attribute substantial influence to this hunch. Speaking with many wonderful, humble, and open scientists at the leading Chinese labs has crystallized a lot of my beliefs. — Read More

#china-ai

N-Day Research with AI: Using Ollama and n8n

I have been working on N-day research for the past year, focusing specifically on Microsoft components. During this time, I developed several tools to support and streamline my research.

… Since there is a growing trend toward AI-driven analysis, I wanted to evaluate whether an AI model could analyze patched and vulnerable functions and independently identify the underlying vulnerability. This approach could be especially useful for initial triage and faster analysis.

So, I decided to experiment with the tools I already have and extend my workflow further. I started by deploying a local LLM and building from there. — Read More
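The article doesn't include its tooling, but the core step it describes, handing a local model a vulnerable function and its patched version and asking for the underlying flaw, can be sketched against Ollama's local REST API. The model name and prompt wording here are illustrative assumptions, not the author's actual setup:

```python
# Sketch: ask a locally deployed Ollama model to compare a vulnerable
# and a patched function and describe the flaw the patch fixes.
# Assumptions: Ollama running on its default port, model name "llama3".
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def build_prompt(vuln_src: str, patched_src: str) -> str:
    """Frame the diff-analysis task for the model."""
    return (
        "You are assisting with N-day vulnerability research.\n"
        "Compare the two versions of this function and explain what "
        "vulnerability the patch fixes.\n\n"
        f"--- VULNERABLE ---\n{vuln_src}\n\n"
        f"--- PATCHED ---\n{patched_src}\n"
    )


def analyze(vuln_src: str, patched_src: str, model: str = "llama3") -> str:
    """Send the prompt to the local model and return its full answer."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(vuln_src, patched_src),
        "stream": False,  # ask for one complete JSON response
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In a pipeline like the one described, a node (e.g. in n8n) would extract the two function bodies from the binary diff and feed them to `analyze` for initial triage.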

#cyber

The AWS MCP Server is now generally available

I have been building with AI agents and MCP tools for a while now, and one question kept coming up: how do you give an agent real, authenticated access to AWS without handing it the keys to the kingdom? Today, there is an answer.

I’m happy to announce the general availability of the AWS MCP Server, a managed remote Model Context Protocol (MCP) server that gives AI agents and coding assistants secure, authenticated access to all AWS services through a small, fixed set of tools. — Read More
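Remote MCP servers are generally registered in an agent or coding-assistant client with a short JSON entry. A hedged sketch of what that registration might look like; the endpoint URL, the exact key names, and the auth header are placeholders that vary by client, not the real AWS values:

```json
{
  "mcpServers": {
    "aws": {
      "url": "https://mcp.example.aws.dev/mcp",
      "headers": {
        "Authorization": "Bearer <token-from-your-identity-provider>"
      }
    }
  }
}
```

The point of the "small, fixed set of tools" design is that the client config stays this small: the agent discovers the tools from the server at runtime rather than enumerating every AWS API in its configuration.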

#devops

What’s new in IAM: Security, governance, and runtime defense

The AI era demands a fundamental shift in security, and that includes identity and access management (IAM). Traditional controls simply aren’t built for autonomous AI agents that interact with sensitive data at machine speed, a reality we address with our new IAM advancements for the agentic enterprise era.

At Google Cloud Next, we introduced a new security and governance paradigm for managing agent identity and access, engineered as built-in Google Cloud capabilities to secure the rapidly expanding world of AI agents. This comprehensive framework centers on foundational Agent Identity and an Agent Gateway with Identity-Aware Proxy, while integrating robust agent access management, agent guardrails, and runtime defense to enable a secure cloud environment for your organization. — Read More

#devops

A 23-Year-Old Swedish Dropout Cracked OpenAI

…[Gabriel Petersson] is a researcher at OpenAI, working on the Sora team — the people building the AI video models that are currently blowing everyone’s minds.

He didn’t get there because he had the right connections or a shiny Ivy League diploma.

He got there because he realized something early on that most of us take decades to figure out: school was just a side quest. Yes, you heard right: he treated school as a side quest. — Read More

#strategy

How the Internet Dies

For years, “Dead Internet Theory” was framed as something done to us. Foreign bot armies. State-sponsored troll farms. Algorithmic propaganda flooding social media from the outside. A clean external villain you could point at, sanction, and try to take down.

That’s not really what’s happening. The platforms are doing it to themselves. Sometimes through outright bad-faith decisions. More often, through the kind of strategic confusion that looks identical to bad-faith from the outside.

In the last two years, four of the most human-feeling corners of the web (Pinterest, Reddit, Steam, Discord) have each made a series of decisions that are gutting the very thing that made them work. Some of those decisions are pure villainy. A lot of them, honestly, are just woefully bad judgment dressed up as strategy. The result is the same either way.

This isn’t really a “the internet is dying” piece. I genuinely don’t know what the internet looks like in five years. I’m pretty sure it’ll be very different from what we have today. But the mechanism is now visible enough that it’s worth writing down, because once you see the pattern, you can’t unsee it on whichever platform you open next. — Read More

#strategy

Demis Hassabis: Agents, AGI & The Next Big Scientific Breakthrough

— Read More

#videos

Behind the Scenes: Hardening Firefox with Claude Mythos Preview

Two weeks ago we announced that we had identified and fixed an unprecedented number of latent security bugs in Firefox with the help of Claude Mythos Preview and other AI models. In this post, we’ll go into more detail about how we approached this work, what we found, and advice for other projects on making good use of emerging capabilities to harden themselves against attack.

Just a few months ago, AI-generated security bug reports to open source projects were mostly known for being unwanted slop. Dealing with reports that look plausibly correct but are wrong imposes an asymmetric cost on project maintainers: it’s cheap and easy to prompt an LLM to find a “problem” in code, but slow and expensive to respond to it.

It is difficult to overstate how much this dynamic changed for us over a few short months. This was due to a combination of two main factors. First, the models got a lot more capable. Second, we dramatically improved our techniques for harnessing these models — steering them, scaling them, and stacking them to generate large amounts of signal and filter out the noise. — Read More
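The post doesn't show code at this point, but the "stacking them to generate large amounts of signal and filter out the noise" idea can be illustrated with a simple consensus filter: run several independent model passes over the same code, then keep only the findings that recur across runs. Everything below, including the vote threshold and the finding format, is an illustrative assumption, not Mozilla's actual pipeline:

```python
# Sketch: filter noisy per-run findings down to high-signal candidates
# by requiring agreement across multiple independent model passes.
from collections import Counter


def consensus_filter(runs: list[list[str]], min_votes: int = 3) -> list[str]:
    """Keep findings reported by at least `min_votes` independent runs.

    `runs` holds the findings from each model pass, normalized so the
    same underlying issue maps to the same string (e.g. "file:func:bug").
    A finding only one noisy pass produces is discarded as likely slop.
    """
    votes = Counter()
    for findings in runs:
        # Count each finding once per run, even if a pass repeats itself.
        votes.update(set(findings))
    return sorted(f for f, n in votes.items() if n >= min_votes)


# Example: five passes over the same function, with noise in some runs.
runs = [
    ["parse.c:read_hdr:overflow", "parse.c:read_hdr:uaf"],
    ["parse.c:read_hdr:overflow"],
    ["parse.c:read_hdr:overflow", "util.c:copy:off-by-one"],
    ["parse.c:read_hdr:overflow", "parse.c:read_hdr:uaf"],
    [],
]
print(consensus_filter(runs))  # only the overflow clears 3 votes
```

Requiring agreement inverts the asymmetric-cost problem the post describes: cheap model passes are spent up front so that humans only review findings multiple runs independently converge on.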

#cyber