In the early 1950s, the United States faced a critical intelligence challenge in its burgeoning competition with the Soviet Union. Outdated German reconnaissance photos from World War II could no longer provide sufficient insight into Soviet military capabilities, and existing U.S. surveillance aircraft could not penetrate the Soviet Union’s closed airspace. This deficiency spurred an audacious moonshot initiative: the development of the U-2 reconnaissance aircraft. Within only a few years, U-2 missions were delivering vital intelligence, capturing images of Soviet missile installations in Cuba and bringing near-real-time insights from behind the Iron Curtain to the Oval Office.
Today, the United States stands at a similar juncture. Competition between Washington and its rivals over the future of the global order is intensifying, and now, much as in the early 1950s, the United States must take advantage of its world-class private sector and ample capacity for innovation to outcompete its adversaries. The U.S. intelligence community must harness the country’s sources of strength to deliver insights to policymakers at the speed of today’s world. The integration of artificial intelligence, particularly through large language models, offers groundbreaking opportunities to improve intelligence operations and analysis, enabling the delivery of faster and more relevant support to decisionmakers. This technological revolution comes with significant downsides, however, especially as adversaries exploit similar advancements to uncover and counter U.S. intelligence operations. With an AI race underway, the United States must challenge itself to be first—first to benefit from AI, first to protect itself from enemies who might use the technology for ill, and first to use AI in line with the laws and values of a democracy.
For the U.S. national security community, fulfilling the promise and managing the peril of AI will require deep technological and cultural change, along with a willingness to rethink how agencies work. The U.S. intelligence and military communities can harness the potential of AI while mitigating its inherent risks, ensuring that the United States maintains its competitive edge in a rapidly evolving global landscape. Even as it does so, the United States must transparently convey to the American public, and to populations and partners around the world, how the country intends to use AI ethically and safely, in compliance with its laws and values. — Read More
Daily Archives: January 31, 2025
The AI guys were lying the whole time
Last week, a Chinese startup called DeepSeek launched its R1 generative-AI model via a free app that is now sitting atop the iOS App Store. Egg-shaped tech investor and former Clubhouse influencer Marc Andreessen called DeepSeek R1 “AI’s Sputnik moment” in an X post Sunday.
And, yes, it is causing a lot of panic. AI and chip manufacturer stocks are in free fall this morning as the market reacts to DeepSeek, which is both open source and basically as good as ChatGPT. Chip manufacturer Nvidia suffered the biggest single-day market-value loss in history today, and DeepSeek is also being targeted by a cyberattack. But if you’re looking for a real breakdown of what DeepSeek can’t do that ChatGPT can, it’s a lot of quality-of-life stuff. It can’t generate images, can’t talk to you, doesn’t support third-party plugins, and doesn’t have “vision” like ChatGPT does. (I’ve actually been using that last feature recently to troubleshoot what’s wrong with my cactuses lol.) All that said, on Monday, DeepSeek released an open-source image generator called Janus-Pro-7B that is, once again, as good as, if not better than, OpenAI’s DALL-E 3.
Limitations aside, the fact that DeepSeek is essentially free (its API costs cents to use), open source, and was reportedly created by a team for only around $5 million (if you believe that) has, as Fast Company put it, raised “several existential questions for America’s tech giants.” Or as noted AI evangelist and OpenAI superfan Ed Zitron wrote on Bluesky this morning, “The AI bubble was inflated based on the idea that we need bigger models that both are trained and run on bigger and even larger GPUs. A company came along that has undermined the narrative — ways both substantive and questionable.” — Read More