Experts react: What Trump’s new AI Action Plan means for tech, energy, the economy, and more

“An industrial revolution, an information revolution, and a renaissance—all at once.” That’s how the Trump administration describes artificial intelligence (AI) in its new “AI Action Plan.” Released on Wednesday, the plan calls for cutting regulations to spur AI innovation and adoption, speeding up the buildout of AI data centers, exporting AI “full technology stacks” to US allies and partners, and ridding AI systems of what the White House calls “ideological bias.” How does the plan’s approach to AI policy differ from past US policy? What impacts will it have on the US AI industry and global AI governance? What are the implications for energy and the global economy? Our experts share their human-generated responses to these burning AI questions below. — Read More

#china-vs-us, #strategy

Surprising no one, new research says AI Overviews cause massive drop in search clicks

Google’s search results have undergone a seismic shift over the past year as AI fever has continued to escalate among the tech giants. Nowhere is this change more apparent than right at the top of Google’s storied results page, which is now home to AI Overviews. Google contends these Gemini-based answers don’t take traffic away from websites, but a new analysis from the Pew Research Center says otherwise. Its analysis shows that searches that surface an AI summary produce measurably fewer clicks on traditional results, and those summaries are appearing in a growing share of searches.

Google began testing AI Overviews as the “Search Generative Experience” in May 2023, and just a year later, they were an official part of the search engine results page (SERP). Many sites (including this one) have noticed changes to their traffic in the wake of this move, but Google has brushed off concerns about how this could affect the sites from which it collects all that data.

SEO experts have disagreed with Google’s stance on how AI affects web traffic, and the newly released Pew study backs them up. — Read More

#strategy

Reflections on OpenAI (Calvin French-Owen)

I left OpenAI three weeks ago. I had joined the company back in May 2024.

I wanted to share my reflections because there’s a lot of smoke and noise around what OpenAI is doing, but not a lot of first-hand accounts of what the culture of working there actually feels like.

Nabeel Qureshi has an amazing post called Reflections on Palantir, where he ruminates on what made Palantir special. I wanted to do the same for OpenAI while it’s fresh in my mind. You won’t find any trade secrets here, more just reflections on this current iteration of one of the most fascinating organizations in history at an extremely interesting time. — Read More

#strategy

hypercapitalism and the AI talent wars

Meta’s multi-hundred-million-dollar comp offers and Google’s multi-billion-dollar Character AI and Windsurf deals signal that we are in a crazy AI talent bubble.

The talent mania could fizzle out as the winners and losers of the AI war emerge, but for the foreseeable future it is the new normal. If the top 1% of companies drive the majority of VC returns, why shouldn’t the same apply to talent? Our natural egalitarian bias makes this hard to accept, but the 10x engineer meme doesn’t go far enough – there are clearly people who are 1,000x the baseline impact.
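A quick back-of-the-envelope way to see the claim: if individual impact is heavy-tailed (power-law distributed) rather than normally distributed, a tiny fraction of people accounts for a large share of total value, just as with VC returns. A minimal sketch, with the tail exponent chosen purely for illustration:

```python
import random

# Illustrative only: sample "impact" from a Pareto (power-law)
# distribution. The shape parameter alpha is an assumption picked to
# make the tail visible, not an empirical estimate of engineer impact.
random.seed(0)
alpha = 1.2
impacts = [random.paretovariate(alpha) for _ in range(100_000)]
impacts.sort(reverse=True)

top_1_percent = impacts[: len(impacts) // 100]
share = sum(top_1_percent) / sum(impacts)
print(f"Top 1% share of total impact: {share:.0%}")
# With a tail this heavy, the top 1% captures a large share of the
# total; the same math is what concentrates VC returns.
```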

This inequality certainly manifests at the founder level (Founders Fund exists for a reason), but applies to employees too. Key people have driven billions of dollars in value – look at Jony Ive’s contribution to the iPhone, or Jeff Dean’s implementation of distributed systems at Google, or Andy Jassy’s incubation of AWS. — Read More

#strategy

Why I don’t think AGI is right around the corner

Sometimes people say that even if all AI progress totally stopped, the systems of today would still be far more economically transformative than the internet. I disagree. I think the LLMs of today are magical. But the reason the Fortune 500 aren’t using them to transform their workflows isn’t that management is too stodgy. Rather, I think it’s genuinely hard to get normal humanlike labor out of LLMs. And this has to do with some fundamental capabilities these models lack.

I like to think I’m “AI forward” here at the Dwarkesh Podcast. I’ve probably spent over a hundred hours trying to build little LLM tools for my post-production setup. And the experience of trying to get them to be useful has extended my timelines. I’ll try to get the LLMs to rewrite autogenerated transcripts for readability the way a human would. Or I’ll try to get them to identify clips from the transcript to tweet out. Sometimes I’ll try to get them to co-write an essay with me, passage by passage. These are simple, self-contained, short-horizon, language-in, language-out tasks – the kinds of assignments that should be dead center in the LLMs’ repertoire. And they’re 5/10 at them. Don’t get me wrong, that’s impressive.
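For a sense of what tools like these look like in practice, here is a minimal sketch of a transcript-rewriting helper using the OpenAI Python client; the model name and prompt text are illustrative assumptions, not his actual setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The system prompt is the only standing place to encode "feedback":
# it is just a string re-sent on every call, so nothing the model gets
# right on one transcript carries over to the next.
SYSTEM_PROMPT = (
    "Rewrite this autogenerated transcript for readability, the way a "
    "human editor would. Preserve the speaker's voice and meaning."
)

def rewrite_transcript(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```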

But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge, huge problem. The LLM baseline at many tasks might be higher than an average human’s, but there’s no way to give a model high-level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt, but in practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience. — Read More

#strategy

The rise of the AI-native employee

I’ve been at Lovable for five weeks and yeah… I’m not in Kansas anymore. This company operates on a completely different level – and as someone who’s spent my entire career in traditional tech, I’m seeing a very different pattern here that’s worth sharing.

Lovable is blowing past crazy revenue milestones: $1M ARR in just 8 days post-launch, $17M in 3 months, $60M in 6 months, and $80M ARR in just 7 months. With ~35 people. That’s not a typo. That’s the new normal – if you’re AI-native. And I don’t just mean the product is AI-native. I mean the people are. — Read More

#strategy

The End of Moore’s Law for AI? Gemini Flash Offers a Warning

For the past few years, the AI industry has operated under its own version of Moore’s Law: an unwavering belief that the cost of intelligence would perpetually decrease by orders of magnitude each year. Like clockwork, each new model generation promised to be not only more capable but also cheaper to run. Last week, Google quietly broke that trend.

In a move that at first went unnoticed, Google significantly increased the price of its popular Gemini 2.5 Flash model. The input token price doubled from $0.15 to $0.30 per million tokens, while the output price more than quadrupled from $0.60 to $2.50 per million. Simultaneously, they introduced a new, less capable model, “Gemini 2.5 Flash Lite”, at a lower price point.
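In concrete terms, here is what that repricing does to a monthly bill (a minimal sketch; the prices are the ones quoted above, and the workload numbers are hypothetical):

```python
# Gemini 2.5 Flash prices in USD per million tokens, before and after
# the change described above.
OLD = {"input": 0.15, "output": 0.60}
NEW = {"input": 0.30, "output": 2.50}

def cost(prices: dict, input_tokens: int, output_tokens: int) -> float:
    """Cost of a workload in USD under a given price table."""
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000

# Hypothetical workload: 50M input and 10M output tokens per month.
before = cost(OLD, 50_000_000, 10_000_000)
after = cost(NEW, 50_000_000, 10_000_000)
print(f"before: ${before:.2f}  after: ${after:.2f}  ({after / before:.1f}x)")
# Prints: before: $13.50  after: $40.00  (3.0x)
```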

This is the first time a major provider has backtracked on the price of an established model. While it may seem like a simple adjustment, we believe this signals a turning point. The industry is no longer on an endless downward slide of cost. Instead, we’ve hit a fundamental soft floor on the cost of intelligence, given the current state of hardware and software. — Read More

#strategy

AI virtual personality YouTubers, or ‘VTubers,’ are earning millions

One of the most popular gaming YouTubers is named Bloo, but he isn’t a human — he’s a VTuber, a fully virtual personality powered by artificial intelligence.

VTubers first gained traction in Japan in the 2010s. Now, advances in AI are making it easier than ever to create VTubers, fueling a new wave of virtual creators on YouTube.

As AI-generated content becomes more common online, concerns about its impact are growing, especially as it becomes easier to generate convincing but entirely AI-fabricated videos. — Read More

How to Become a VTuber

#strategy

#vfx

Project Vend: Can Claude run a small shop? (And why does that matter?)

We let Claude manage an automated store in our office as a small business for about a month. We learned a lot from how close it was to success—and the curious ways that it failed—about the plausible, strange, not-too-distant future in which AI models are autonomously running things in the real economy. — Read More

#strategy

Machines of Faithful Obedience

Throughout history, technological and scientific advances have had both good and ill effects, but their overall impact has been overwhelmingly positive. Thanks to scientific progress, most people on earth live longer, healthier, and better than they did centuries or even decades ago.

I believe that AI (including AGI and ASI) can do the same and be a positive force for humanity. I also believe that it is possible to solve the “technical alignment” problem and build AIs that follow the words and intent of our instructions and report faithfully on their actions and observations.

… In the next decade, AI progress will be extremely rapid, and such periods of sharp transition can be risky. What we — in industry, academia, and government — do in the coming years will matter a lot to ensure that AI’s benefits far outweigh its costs. — Read More

#strategy