Daily Archives: August 12, 2025
It’s not 10x. It’s 36x – this is what it looks like to kill a $30k meeting with AI
I killed our weekly triage meeting last month. Three hours compressed to five minutes. But here’s the thing—it took me six failed attempts to get there.
The breakthrough wasn’t making the AI smarter. It was making the task more structured. This is what context engineering actually looks like—messy, iterative, and focused on constraints rather than capabilities.
Let me show you what it really takes to achieve a 36x productivity gain with AI. Spoiler: it’s not about the AI at all. — Read More
From GPT-2 to gpt-oss: Analyzing the Architectural Advances
OpenAI just released their new open-weight LLMs this week: gpt-oss-120b and gpt-oss-20b, their first open-weight models since GPT-2 in 2019. And yes, thanks to some clever optimizations, they can run locally (but more about this later).
This is the first time since GPT-2 that OpenAI has shared a large, fully open-weight model. Earlier GPT models showed how the transformer architecture scales, and the 2022 ChatGPT release then made these models mainstream by demonstrating concrete usefulness for writing and knowledge (and later coding) tasks. Now OpenAI has shared the long-awaited open-weight models, and the architecture has some interesting details.
I spent the past few days reading through the code and technical reports to summarize the most interesting details. (Just days after, OpenAI also announced GPT-5, which I will briefly discuss in the context of the gpt-oss models at the end of this article.) — Read More
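For readers curious about the "runs locally" claim above, here is a minimal sketch of loading the smaller model with the Hugging Face transformers library. It assumes the checkpoint is published under the openai/gpt-oss-20b model ID and that your installed transformers and torch versions already support the architecture; treat it as a starting point, not an official recipe.

```python
# Minimal sketch: loading gpt-oss-20b locally via Hugging Face transformers.
# Assumes the checkpoint lives at "openai/gpt-oss-20b" and that your
# transformers/torch versions support the architecture; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread layers across available GPU(s)/CPU
)

messages = [{"role": "user", "content": "Summarize the transformer architecture in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```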
Three Macro Predictions on AI
OpenAI just released GPT-5—to great fanfare and mixed reviews around the internet. According to benchmarks and subjective personal testing, GPT-5 is better than GPT-4 and o3.
It’s certainly a better default than GPT-4o, which is what most people used in ChatGPT’s interface. The model dominates across the board in LMArena. I don’t feel the difference as much, but I also used OpenAI’s research previews of o3-mini-high, GPT-4.5, and other models for specific tasks, so I don’t really see it as revolutionary. That makes sense, though: today, if you try to select other models in the Plus subscription, all you get is GPT-5 and GPT-5 Thinking (the latter being the “high effort” version of the former).
The functionality of those research previews has all been rolled into the 5-series. — Read More
ChatGPT is bringing back 4o as an option because people missed it
OpenAI is bringing back GPT-4o in ChatGPT just one day after replacing it with GPT-5. In a post on X, OpenAI CEO Sam Altman confirmed that the company will let paid users switch to GPT-4o after ChatGPT users mourned its replacement.
“We will let Plus users choose to continue to use 4o,” Altman says. “We will watch usage as we think about how long to offer legacy models for.”
For months, ChatGPT fans have been waiting for the launch of GPT-5, which OpenAI says comes with major improvements to writing and coding capabilities over its predecessors. But shortly after the flagship AI model launched, many users wanted to go back.
“GPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend,” a user on Reddit writes. “This morning I went to talk to it and instead of a little paragraph with an exclamation point, or being optimistic, it was literally one sentence. Some cut-and-dry corporate bs.” — Read More