By now, I’m sure you’ve heard that the Department of War has declared Anthropic a supply chain risk, because Anthropic refused to remove red lines around the use of its models for mass surveillance and for autonomous weapons.
Honestly, I think this situation is a warning shot. Right now, LLMs are probably not being used in mission-critical ways. But within 20 years, 99% of the workforce in the military, the government, and the private sector will be AIs. This includes the soldiers (by which I mean the robot armies), the superhumanly intelligent advisors and engineers, the police, you name it.
Our future civilization will run on AI labor. And as much as the government’s actions here piss me off, in a way I’m glad this episode happened – because it gives us the opportunity to think through some extremely important questions about who this future workforce will be accountable and aligned to, and who gets to determine that. — Read More
Anthropic, OpenAI and the Department of War
When I went to bed last night, it appeared that Secretary of War Pete Hegseth (it still feels surreal to type that phrase) had potentially undermined American competitiveness by instructing the federal government not to use Claude and designating the company behind it, Anthropic, as a supply chain risk, a move that could force Nvidia, Amazon, Google and other companies that contract with the federal government to divest from Anthropic. Was the military going to be stuck using Elon Musk’s Grok, a model that has its uses but is decidedly not on the lead lap and is reportedly considered too unreliable for classified settings?
Nope. Instead, I awoke to news that the Pentagon had reached an agreement with Anthropic rival OpenAI. (And also that we were bombing Iran.) This is at least a little bit more rational, which is not to say that you should feel happy about any of this. The story is complicated and is still developing; Anthropic will take its case to court and the government could TACO out. (For instance, by signing the deal with OpenAI but unbanning Claude.)
Nevertheless, the intersection of AI and politics falls squarely into the Silver Bulletin wheelhouse, something I’m sure we’ll be covering more and more. — Read More
Department of War Announces New Cybersecurity Risk Management Construct
The Department of War (DoW) today announced the implementation of a groundbreaking Cybersecurity Risk Management Construct (CSRMC), a transformative framework to deliver real-time cyber defense at operational speed. This five-phase construct ensures a hardened, verifiable, continuously monitored, and actively defended environment to ensure that U.S. warfighters maintain technological superiority against rapidly evolving and emerging cyber threats. — Read More
How AI Is Eroding the Norms of War
Since 2022, I have reported on Russia’s full-scale invasion of Ukraine, witnessing firsthand the rapid evolution of technology on the battlefield. Embedded with drone units, I have seen how technology has evolved, with each side turning once-improvised tools into cutting-edge systems that dictate life and death.
In the early months of the war, Ukrainian soldiers relied on off-the-shelf drones for reconnaissance and support. As Russian forces developed countermeasures, the two sides entered a technological arms race. This cycle of innovation has transformed the battlefield, but it has also sparked a moral descent — a “race to the bottom” — in the rules of war.
In the effort to eke out an advantage, combatants are pushing ethical boundaries, eroding the norms of warfare. Troops disguise themselves in civilian clothing to evade drone detection, while autonomous targeting systems struggle to distinguish combatants from noncombatants.
The evolution of automated drone combat in Ukraine should be a cautionary tale for the rest of the world about the future of warfare. — Read More
Spy vs. AI
In the early 1950s, the United States faced a critical intelligence challenge in its burgeoning competition with the Soviet Union. Outdated German reconnaissance photos from World War II could no longer provide sufficient intelligence about Soviet military capabilities, and existing U.S. surveillance capabilities were no longer able to penetrate the Soviet Union’s closed airspace. This deficiency spurred an audacious moonshot initiative: the development of the U-2 reconnaissance aircraft. In only a few years, U-2 missions were delivering vital intelligence, capturing images of Soviet missile installations in Cuba and bringing near-real-time insights from behind the Iron Curtain to the Oval Office.
Today, the United States stands at a similar juncture. Competition between Washington and its rivals over the future of the global order is intensifying, and now, much as in the early 1950s, the United States must take advantage of its world-class private sector and ample capacity for innovation to outcompete its adversaries. The U.S. intelligence community must harness the country’s sources of strength to deliver insights to policymakers at the speed of today’s world. The integration of artificial intelligence, particularly through large language models, offers groundbreaking opportunities to improve intelligence operations and analysis, enabling the delivery of faster and more relevant support to decisionmakers. This technological revolution comes with significant downsides, however, especially as adversaries exploit similar advancements to uncover and counter U.S. intelligence operations. With an AI race underway, the United States must challenge itself to be first—first to benefit from AI, first to protect itself from enemies who might use the technology for ill, and first to use AI in line with the laws and values of a democracy.
For the U.S. national security community, fulfilling the promise and managing the peril of AI will require deep technological and cultural changes and a willingness to change the way agencies work. The U.S. intelligence and military communities can harness the potential of AI while mitigating its inherent risks, ensuring that the United States maintains its competitive edge in a rapidly evolving global landscape. Even as it does so, the United States must transparently convey to the American public, and to populations and partners around the world, how the country intends to ethically and safely use AI, in compliance with its laws and values. — Read More
OpenAI launches ChatGPT Gov, hoping to further government ties
OpenAI has announced a new, more tailored version of ChatGPT called ChatGPT Gov, a service that the company said is meant to accelerate government use of the tool for non-public sensitive data.
In an announcement Tuesday, the company said that ChatGPT Gov, which can run in the Microsoft Azure commercial cloud or Azure Government cloud, will give federal agencies increased ability to use OpenAI frontier models. The product is also supposed to make it easier for agencies to follow certain cybersecurity and compliance requirements, while exploring potential applications of the technology, the announcement said.
Through ChatGPT Gov, federal agencies can use GPT-4o, along with a series of other OpenAI tools, and build custom search and chat systems developed by agencies. — Read More
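OpenAI has not published ChatGPT Gov’s API in this excerpt, but its frontier models are conventionally reached through a chat-completions-style request. As a rough illustration of the kind of request body an agency-built chat system would assemble, here is a minimal sketch; the function name, system prompt, and parameter choices are assumptions for illustration, not OpenAI’s documented interface:

```python
# Sketch: assembling a chat-completions-style request payload for a
# GPT-4o deployment. The helper name and prompts below are illustrative
# placeholders, not part of any documented ChatGPT Gov interface.

def build_chat_request(system_prompt: str, user_query: str,
                       model: str = "gpt-4o") -> dict:
    """Assemble the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_query},
        ],
        "temperature": 0.2,  # low temperature for more deterministic answers
    }

payload = build_chat_request(
    "You answer questions using only the agency's document index.",
    "Summarize the compliance requirements for cloud deployments.",
)
```

In a real deployment the payload would be posted to the agency’s Azure-hosted endpoint with the appropriate credentials; the point here is only the shape of the system/user message structure that custom chat systems are built around.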
CISA official: AI tools ‘need to have a human in the loop’
An abbreviated rundown of the Cybersecurity and Infrastructure Security Agency’s artificial intelligence work goes something like this: a dozen use cases, a pair of completed AI security tabletop exercises and a robust roadmap for how the technology should be used.
Lisa Einstein, who took over as CISA’s first chief AI officer in August and has played a critical role in each of those efforts, considers herself an optimist when it comes to the technology’s potential, particularly as it relates to cyber defenses. But speaking Wednesday at two separate events in Washington, D.C., Einstein mixed that optimism with a few doses of caution. — Read More
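The “human in the loop” requirement Einstein describes has a simple programmatic shape: an AI-proposed action is never executed until a human reviewer signs off. The sketch below is a generic illustration of that pattern; the class and function names are invented here and do not describe CISA’s actual tooling:

```python
# Generic human-in-the-loop gate: the model proposes, a human disposes.
# All names here are illustrative assumptions, not CISA's systems.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str   # what the AI system wants to do
    confidence: float  # model's self-reported confidence

def execute_with_review(action: ProposedAction,
                        reviewer: Callable[[ProposedAction], bool]) -> str:
    """Run the action only if the human reviewer approves it."""
    if reviewer(action):
        return f"EXECUTED: {action.description}"
    return f"BLOCKED: {action.description}"

# Usage: the reviewer callback stands in for a human analyst's decision.
result = execute_with_review(
    ProposedAction("quarantine host 10.0.0.5", confidence=0.91),
    reviewer=lambda a: a.confidence >= 0.9,  # stand-in for analyst judgment
)
```

The design point is that the approval step is structural, not optional: there is no code path from proposal to execution that bypasses the reviewer callback.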
AI’s ‘Oppenheimer moment’: autonomous weapons enter the battlefield
The military use of AI-enabled weapons is growing, and the industry that provides them is booming
A squad of soldiers is under attack and pinned down by rockets in the close quarters of urban combat. One of them makes a call over his radio, and within moments a fleet of small autonomous drones equipped with explosives flies through the town square, entering buildings and scanning for enemies before detonating on command. One by one the suicide drones seek out and kill their targets. A voiceover on the video, a fictional ad for multibillion-dollar Israeli weapons company Elbit Systems, touts the AI-enabled drones’ ability to “maximize lethality and combat tempo”.
While defense companies like Elbit promote their new advancements in artificial intelligence (AI) with sleek dramatizations, the technology they are developing is increasingly entering the real world. — Read More
Anduril Reveals ‘Pulsar’ family of AI-Learning Electronic Warfare Systems
On May 6, the defense company Anduril Industries revealed it had secretly developed Pulsar, a family of AI-enhanced electronic warfare systems already in operational use on multiple continents, including two combat zones, with clients including the U.S. military.
… Pulsar is described as leveraging AI to recognize and adapt to never-before-seen threats, a traditional Achilles’ heel of AI. Like the Borg in Star Trek, it’s intended to rapidly identify and analyze unfamiliar threats (anomalous signals) and harness AI to devise a countermeasure. The resulting new threat data and countermeasures are then distributed across the network of Pulsar systems. — Read More
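The loop the article describes, flag a signal no node has seen before, then distribute the new threat data across the fleet, can be sketched in a few lines. Anduril’s actual implementation is proprietary and not public; everything below (class names, the string-based “signatures”) is a deliberately toy stand-in for what would really be signal-processing and ML components:

```python
# Toy sketch of the described loop: detect an anomalous (never-seen)
# signal at one node, then propagate the new threat signature to all
# nodes. Purely illustrative; not Anduril's Pulsar implementation.

class EWNode:
    """One electronic-warfare node with a library of known emitters."""
    def __init__(self, known_signatures: set):
        self.known = set(known_signatures)

    def observe(self, signature: str) -> bool:
        """Return True if the signal is anomalous (not in the library)."""
        return signature not in self.known

    def learn(self, signature: str) -> None:
        self.known.add(signature)

def propagate(anomaly: str, fleet: list) -> None:
    """Share a newly identified threat signature with every node."""
    for node in fleet:
        node.learn(anomaly)

fleet = [EWNode({"radar-A", "comms-B"}) for _ in range(3)]
new_threat = "jammer-X"
if fleet[0].observe(new_threat):   # node 0 sees an unknown signal
    propagate(new_threat, fleet)   # every node now recognizes it
```

The interesting design property, per the article, is the network effect: a threat encountered once, anywhere, ceases to be novel for every other system in the fleet.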