ARTIFICIAL GENERAL INTELLIGENCE AND THE FOURTH OFFSET

The recent strides toward artificial general intelligence (AGI)—AI systems surpassing human abilities across most cognitive tasks—have come from scaling “foundation models.” Their performance across tasks follows clear “scaling laws,” improving as a power law with model size, dataset size, and the amount of compute used to train the model.1 Continued investment in training compute and algorithmic innovations has driven a predictable rise in model capabilities.
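The power-law relationship the scaling laws describe can be sketched in a few lines. The functional form below follows the common loss decomposition in parameters and training tokens; the constants and exponents are hypothetical, chosen for illustration rather than fitted to any real model family.

```python
# Illustrative sketch of a neural scaling law: loss falls as a power law
# in parameter count N and training tokens D. All constants are
# hypothetical placeholders, not empirical fits.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Power-law loss decomposition: L = E + A / N**alpha + B / D**beta."""
    E, A, B = 1.7, 400.0, 410.0      # hypothetical irreducible loss and scale terms
    alpha, beta = 0.34, 0.28         # hypothetical scaling exponents
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling model size and data by 10x each predictably lowers the loss,
# which is the "predictable rise in capabilities" described above.
assert predicted_loss(1e10, 2e11) < predicted_loss(1e9, 2e10)
```

Under this form, capability gains are smooth and forecastable from compute budgets alone, which is what makes extrapolation to a "critical scale" conceivable in the first place.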

Much as the architects of the atomic bomb postulated a “critical mass”—the amount of fissile material needed to sustain a chain reaction—we can conceive of a “critical scale” in AGI development: the point at which a foundation model automates its own research and development. A model at this scale would produce research and development output equivalent to that of hundreds of millions of scientists and engineers—10,000 Manhattan Projects.2

This would amount to a “fourth offset”: a lead in the development of AGI-derived weapons, tactics, and operational methods. Applications would range from effectively unlimited cyber and information operations to potentially decisive left-of-launch capabilities—from tracking and targeting ballistic missile submarines to, at the high end, an impenetrable missile defense capable of negating nuclear weapons—giving the first nation to develop AGI unprecedented national security policy options.

Preventing the proliferation of foundation models at the critical scale would therefore also prevent the spread of AGI-derived novel weapons. This supposition raises the stakes of counter-proliferation for the next stages of AGI development. AGI itself could also support counter-proliferation strategy, providing the means to ensure that models at this scale do not spread. This would cement the first-mover advantage in AGI development and, over time, compound that advantage into a fourth offset. — Read More

#china-vs-us