Google says it has pulled AI model Gemma from its AI Studio platform after a Republican senator complained the model, designed for developers, “fabricated serious criminal allegations” about her.
In a post on X, Google’s official news account said the company had “seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions.” AI Studio is a platform for developers and not a conventional way for regular consumers to access Google’s AI models. Gemma is specifically billed as a family of AI models for developers to use, with variants for medical use, coding, and evaluating text and image content. — Read More
Monthly Archives: November 2025
What’s up with Anthropic predicting AGI by early 2027?
As far as I’m aware, Anthropic is the only AI company with official AGI timelines[1]: they expect AGI by early 2027. In their recommendations to the OSTP (from March 2025) for the AI Action Plan, they say:
As our CEO Dario Amodei writes in ‘Machines of Loving Grace’, we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties:
Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering.
They often describe this capability level as a “country of geniuses in a datacenter”. — Read More
The Great Decoupling of Labor and Capital
Almost two decades ago, in 2007, Hewlett-Packard (HP) became the first tech company to exceed the $100 billion annual revenue threshold. At the time, HP had 172k employees. The very next year, IBM joined the club, but with almost 400k employees.
Today’s megacap tech companies all exhibit a common characteristic: their growth is largely decoupled from their headcount. Intuitively, this might not be news to anyone, but when I sat down and carefully jotted down the numbers, the extent of the decoupling, even before Generative AI truly came onto the scene, was a bit astonishing to me. — Read More
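The decoupling becomes concrete as revenue per employee. A quick calculation using only the HP and IBM figures cited above:

```python
# Revenue per employee for the two companies cited above,
# each at roughly $100B in annual revenue when they crossed the threshold.
companies = {
    "HP (2007)": (100e9, 172_000),
    "IBM (2008)": (100e9, 400_000),
}

for name, (revenue, headcount) in companies.items():
    print(f"{name}: ${revenue / headcount:,.0f} per employee")
# HP (2007): $581,395 per employee
# IBM (2008): $250,000 per employee
```

Today's megacaps generate several million dollars per employee, which is the decoupling the piece goes on to quantify.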
Remote Labor Index: Measuring AI Automation of Remote Work
AIs have made rapid progress on research-oriented benchmarks of knowledge and reasoning, but it remains unclear how these gains translate into economic value and automation. To measure this, we introduce the Remote Labor Index (RLI), a broadly multi-sector benchmark comprising real-world, economically valuable projects designed to evaluate end-to-end agent performance in practical settings. AI agents perform near the floor on RLI, with the highest-performing agent achieving an automation rate of 2.5%. These results help ground discussions of AI automation in empirical evidence, setting a common basis for tracking AI impacts and enabling stakeholders to proactively navigate AI-driven labor automation. — Read More
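As a sketch of the metric's shape only (the project structure and pass/fail scoring here are assumptions for illustration, not the RLI's actual methodology), an "automation rate" is simply the fraction of real-world projects an agent completes acceptably:

```python
# Hypothetical sketch: automation rate as the fraction of projects an
# agent completes to an acceptable standard. NOT the RLI's actual
# scoring method, just an illustration of what a 2.5% rate means.
def automation_rate(results: list[bool]) -> float:
    """results: one boolean per project, True if the deliverable passed."""
    return sum(results) / len(results)

# 1 acceptable deliverable out of 40 projects -> a 2.5% automation rate.
print(f"{automation_rate([True] + [False] * 39):.1%}")  # → 2.5%
```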
#strategy — Coca-Cola | Holidays are Coming, Behind the Scenes, Classical (2:42)
The secret to sustainable AI may have been in our brains all along
Researchers have developed a new method for training artificial intelligence that dramatically improves its speed and energy efficiency by mimicking the structured wiring of the human brain. The approach, detailed in the journal Neurocomputing, creates AI models that can match or even exceed the accuracy of conventional networks while using a small fraction of the computational resources.
The study was motivated by a growing challenge in the field of artificial intelligence: sustainability. Modern AI systems, such as the large language models that power generative AI, have become enormous. They are built with billions of connections, and training them can require vast amounts of electricity and cost tens of millions of dollars. As these models continue to expand, their financial and environmental costs are becoming a significant concern. — Read More
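A minimal illustration of the underlying idea, structured sparse wiring, is a fixed connectivity mask applied to a layer's weights. The random mask below is a stand-in assumption; the paper's actual topology is brain-inspired, not random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer: 256 inputs -> 128 outputs = 32,768 weights.
weights = rng.standard_normal((128, 256))

# Keep only ~10% of connections via a fixed binary mask, standing in
# for a structured, brain-like wiring pattern.
mask = rng.random(weights.shape) < 0.10
sparse_weights = weights * mask

x = rng.standard_normal(256)
y = sparse_weights @ x  # forward pass uses only the surviving ~10% of weights

print(f"active connections: {mask.sum()} of {mask.size}")
```

The appeal is that compute and energy scale with the number of active connections, so a well-chosen sparse topology can match dense accuracy at a fraction of the cost.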
The End of Cloud Inference
Most people picture the future of AI the same way they picture the internet: somewhere far away, inside giant buildings full of humming machines. Your phone or laptop sends a request, a distant data center does the thinking, and the answer streams back. That story has been useful for getting AI off the ground, but it’s not how it ends. For a lot of everyday tasks, the smartest place to run AI will be where the data already lives: on your device.
We already accept this in another computationally demanding field: graphics. No one renders every frame of a video game in a warehouse and streams the pixels to your screen. Your device does the heavy lifting locally because it’s faster, cheaper, and more responsive. AI is heading the same way. The cloud won’t disappear, but it will increasingly act like a helpful backup or “bigger battery,” not the default engine for everything. — Read More
AI Broke Interviews
Interviewing has always been a big can of worms in the software industry. For years, big tech has gone with LeetCode-style questions mixed with a few behavioural and system design rounds. Before that, it was brainteasers. I still remember the “How Would You Move Mount Fuji?” era. Tech has never really had good interviewing, but the question remains: how do you actually evaluate someone’s ability to reason about data structures and algorithms without asking algorithmic questions? I don’t know. People say engineers don’t need DSA. Perhaps. Nonetheless, it’s still taught in CS programs for a reason.
In real work, I’ve used data structure-y thinking maybe a handful of times, but I had more time to think. Literally days. Companies, on the other hand, need to make fast decisions. It’s not perfect. It was never perfect. But it worked well enough. Well, at least, I thought it did.
And then AI detonated the whole thing. Sure, people could technically cheat before. You could have friends feeding you hints or solving the problem with you, but even that was limited. You needed friends who could code. And if your friend was good enough to help you, chances are you weren’t completely incompetent yourself. Cheating still filtered for a certain baseline. AI destroyed that filtration layer.
Everyone now has access to perfect code, perfect explanations, perfect system design diagrams, and even perfect behavioural answers. — Read More
Anonymous credentials: rate-limiting bots and agents without compromising privacy
The way we interact with the Internet is changing. Not long ago, ordering a pizza meant visiting a website, clicking through menus, and entering your payment details. Soon, you might just ask your phone to order a pizza that matches your preferences. A program on your device or on a remote server, which we call an AI agent, would visit the website and orchestrate the necessary steps on your behalf.
Of course, agents can do much more than order pizza. Soon we might use them to buy concert tickets, plan vacations, or even write, review, and merge pull requests. While some of these tasks will eventually run locally, for now, most are powered by massive AI models running in the biggest datacenters in the world. As agentic AI increases in popularity, we expect to see a large increase in traffic from these AI platforms and a corresponding drop in traffic from more conventional sources (like your phone).
This shift in traffic patterns has prompted us to assess how to keep our customers online and secure in the AI era. — Read More
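A toy sketch of the rate-limiting shape involved: the origin accepts a budget of single-use tokens per client instead of counting requests per IP. This models only the budget mechanics; real anonymous-credential schemes (such as Privacy Pass) issue tokens blindly so redemptions cannot be linked back to the client, which this sketch deliberately omits:

```python
import secrets

class TokenIssuer:
    """Issues a fixed budget of single-use tokens per client.
    A real scheme would issue these blindly (unlinkably); here we
    only model the per-client budget."""
    def __init__(self, budget: int = 5):
        self.budget = budget
        self.valid: set[str] = set()

    def issue(self) -> list[str]:
        tokens = [secrets.token_hex(16) for _ in range(self.budget)]
        self.valid.update(tokens)
        return tokens

class Origin:
    """Accepts each token exactly once, so a client gets `budget` requests."""
    def __init__(self, issuer: TokenIssuer):
        self.issuer = issuer

    def handle(self, token: str) -> bool:
        if token in self.issuer.valid:
            self.issuer.valid.remove(token)
            return True   # serve the request
        return False      # spent or invalid token: rate-limited

issuer = TokenIssuer(budget=2)
origin = Origin(issuer)
t1, t2 = issuer.issue()
print(origin.handle(t1), origin.handle(t1), origin.handle(t2))  # True False True
```

The point is that an AI agent (or bot) can be budgeted without the origin ever learning who is behind each request.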
Why You’ll Never Have a FAANG Data Infrastructure and That’s the Point | Part 1
This is Part 1 of a Series on FAANG data infrastructures. In this series, we’ll be breaking down the state-of-the-art designs, processes, and cultures that FAANGs or similar technology-first organisations have developed over decades. And in doing so, we’ll uncover why enterprises desire such infrastructures, whether these are feasible desires, and what the routes are through which we can map state-of-the-art outcomes without the decades invested or the millions spent in experimentation. This is an introductory piece, touching on the fundamental questions, and in the upcoming pieces, we’ll pick one FAANG at a time and break down the infrastructure to project common patterns and design principles, and illustrate replicable maps to the outcomes. — Read More
#data-science