Google’s search results have undergone a seismic shift over the past year as AI fever has continued to escalate among the tech giants. Nowhere is this change more apparent than right at the top of Google’s storied results page, which is now home to AI Overviews. Google contends these Gemini-based answers don’t take traffic away from websites, but a new analysis from the Pew Research Center says otherwise. The study finds that searches returning AI summaries produce fewer clicks to outside sites, and such summaries are appearing in a growing share of searches.
Google began testing AI Overviews as the Search Generative Experience in May 2023, and just a year later, they were an official part of the search engine results page (SERP). Many sites (including this one) have noticed changes to their traffic in the wake of this move, but Google has brushed off concerns about how the feature could affect the sites from which it collects all that data.
SEO experts have disagreed with Google’s stance on how AI affects web traffic, and the newly released Pew study backs them up. — Read More
‘Another DeepSeek moment’: Chinese AI model Kimi K2 stirs excitement
Excitement is growing among researchers about another powerful artificial intelligence (AI) model to emerge from China, after DeepSeek shocked the world with its launch of R1 in January.
The performance of Kimi K2, launched on 11 July by Beijing-based company Moonshot AI, matches or surpasses that of Western rivals, as well as some DeepSeek models, across various benchmarks, according to the firm. In particular, it appears to excel at coding, scoring highly in tests such as LiveCodeBench. — Read More
ARTIFICIAL GENERAL INTELLIGENCE AND THE FOURTH OFFSET
The recent strides toward artificial general intelligence (AGI)—AI systems surpassing human abilities across most cognitive tasks—have come from scaling “foundation models.” Their performance across tasks follows clear “scaling laws,” improving as a power law with model size, dataset size, and the amount of compute used to train the model.1 Continued investment in training compute and algorithmic innovations has driven a predictable rise in model capabilities.
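As a point of reference for readers unfamiliar with the term, the “power law” the authors allude to is usually written in a form like the one below, adapted from the widely cited Chinchilla analysis (Hoffmann et al., 2022); it is offered here only as an illustrative sketch of what such a scaling law looks like, not as the specific relation the article’s footnote cites.

L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

Here L is the model’s pretraining loss, N the number of parameters, D the number of training tokens, and E, A, B, \alpha, \beta empirically fitted constants. Because loss declines smoothly as N and D grow, the capability gains from additional training compute are, to a first approximation, forecastable.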
Just as the architects of the atomic bomb postulated a “critical mass,” the amount of fissile material needed to sustain a chain reaction, we can conceive of a “critical scale” in AGI development: the point at which a foundation model automates its own research and development. A model at this scale would yield research and development output equivalent to that of hundreds of millions of scientists and engineers, on the order of 10,000 Manhattan Projects.2
This would amount to a “fourth offset”: a lead in the development of AGI-derived weapons, tactics, and operational methods. Applications would include unlimited cyber and information operations and potentially decisive left-of-launch capabilities, ranging from tracking and targeting ballistic missile submarines to, at the high end, an impenetrable missile defense capable of negating nuclear weapons, providing the first nation to develop AGI with unprecedented national security policy options.
Preventing the proliferation of foundation models at the critical scale would therefore also prevent the spread of novel AGI-derived weapons. This raises the stakes for counter-proliferation of the components needed for the next stages of AGI. AGI could also be used to support counter-proliferation strategy, providing the means to ensure that models at this scale do not proliferate. This would cement the first-mover advantage in AGI development and, over time, compound that advantage into a fourth offset. — Read More