The Department of War (DoW) today announced the implementation of a groundbreaking Cybersecurity Risk Management Construct (CSRMC), a transformative framework to deliver real-time cyber defense at operational speed. This five-phase construct provides a hardened, verifiable, continuously monitored, and actively defended environment so that U.S. warfighters maintain technological superiority against rapidly evolving and emerging cyber threats. — Read More
How AI Is Eroding the Norms of War
Since 2022, I have reported on Russia’s full-scale invasion of Ukraine, witnessing firsthand the rapid evolution of technology on the battlefield. Embedded with drone units, I have seen how technology has evolved, with each side turning once-improvised tools into cutting-edge systems that dictate life and death.
In the early months of the war, Ukrainian soldiers relied on off-the-shelf drones for reconnaissance and support. As Russian forces developed countermeasures, the two sides entered a technological arms race. This cycle of innovation has transformed the battlefield, but it has also sparked a moral descent — a “race to the bottom” — in the rules of war.
In the effort to eke out an advantage, combatants are pushing ethical boundaries, eroding the norms of warfare. Troops disguise themselves in civilian clothing to evade drone detection, while autonomous targeting systems struggle to distinguish combatants from noncombatants.
The evolution of automated drone combat in Ukraine should be a cautionary tale for the rest of the world about the future of warfare. — Read More
Spy vs. AI
In the early 1950s, the United States faced a critical intelligence challenge in its burgeoning competition with the Soviet Union. Outdated German reconnaissance photos from World War II could no longer provide sufficient intelligence about Soviet military capabilities, and existing U.S. surveillance capabilities were no longer able to penetrate the Soviet Union’s closed airspace. This deficiency spurred an audacious moonshot initiative: the development of the U-2 reconnaissance aircraft. In only a few years, U-2 missions were delivering vital intelligence, capturing images of Soviet missile installations in Cuba and bringing near-real-time insights from behind the Iron Curtain to the Oval Office.
Today, the United States stands at a similar juncture. Competition between Washington and its rivals over the future of the global order is intensifying, and now, much as in the early 1950s, the United States must take advantage of its world-class private sector and ample capacity for innovation to outcompete its adversaries. The U.S. intelligence community must harness the country’s sources of strength to deliver insights to policymakers at the speed of today’s world. The integration of artificial intelligence, particularly through large language models, offers groundbreaking opportunities to improve intelligence operations and analysis, enabling the delivery of faster and more relevant support to decisionmakers. This technological revolution comes with significant downsides, however, especially as adversaries exploit similar advancements to uncover and counter U.S. intelligence operations. With an AI race underway, the United States must challenge itself to be first—first to benefit from AI, first to protect itself from enemies who might use the technology for ill, and first to use AI in line with the laws and values of a democracy.
For the U.S. national security community, fulfilling the promise and managing the peril of AI will require deep technological and cultural changes and a willingness to change the way agencies work. The U.S. intelligence and military communities can harness the potential of AI while mitigating its inherent risks, ensuring that the United States maintains its competitive edge in a rapidly evolving global landscape. Even as it does so, the United States must transparently convey to the American public, and to populations and partners around the world, how the country intends to ethically and safely use AI, in compliance with its laws and values. — Read More
OpenAI launches ChatGPT Gov, hoping to further government ties
OpenAI has announced a new, more tailored version of ChatGPT called ChatGPT Gov, a service that the company said is meant to accelerate government use of the tool for non-public, sensitive data.
In an announcement Tuesday, the company said that ChatGPT Gov, which can run in the Microsoft Azure commercial cloud or Azure Government cloud, will give federal agencies increased ability to use OpenAI frontier models. The product is also supposed to make it easier for agencies to follow certain cybersecurity and compliance requirements, while exploring potential applications of the technology, the announcement said.
Through ChatGPT Gov, federal agencies can use GPT-4o, along with a series of other OpenAI tools, and build custom search and chat systems developed by agencies. — Read More
CISA official: AI tools ‘need to have a human in the loop’
An abbreviated rundown of the Cybersecurity and Infrastructure Security Agency’s artificial intelligence work goes something like this: a dozen use cases, a pair of completed AI security tabletop exercises and a robust roadmap for how the technology should be used.
Lisa Einstein, who took over as CISA’s first chief AI officer in August and has played a critical role in each of those efforts, considers herself an optimist when it comes to the technology’s potential, particularly as it relates to cyber defenses. But speaking Wednesday at two separate events in Washington, D.C., Einstein mixed that optimism with a few doses of caution. — Read More
AI’s ‘Oppenheimer moment’: autonomous weapons enter the battlefield
The military use of AI-enabled weapons is growing, and the industry that provides them is booming
A squad of soldiers is under attack and pinned down by rockets in the close quarters of urban combat. One of them makes a call over his radio, and within moments a fleet of small autonomous drones equipped with explosives flies through the town square, entering buildings and scanning for enemies before detonating on command. One by one the suicide drones seek out and kill their targets. A voiceover on the video, a fictional ad for multibillion-dollar Israeli weapons company Elbit Systems, touts the AI-enabled drones’ ability to “maximize lethality and combat tempo”.
While defense companies like Elbit promote their new advancements in artificial intelligence (AI) with sleek dramatizations, the technology they are developing is increasingly entering the real world. — Read More
Anduril Reveals ‘Pulsar’ family of AI-Learning Electronic Warfare Systems
On May 6, the defense company Anduril Industries revealed it had secretly developed Pulsar, a family of AI-enhanced electronic warfare systems already in operational use on multiple continents — including in two combat zones — with clients including the U.S. military.
… Pulsar is described as leveraging AI to recognize and adapt to never-before-seen threats, a traditional Achilles heel of AI. Like the Borg in Star Trek, it’s intended to rapidly identify and analyze unfamiliar threats (anomalous signals) and harness AI to rapidly devise a countermeasure. The resulting new threat data and countermeasures are then distributed across the network of Pulsar systems. — Read More
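The detect-adapt-share loop described above can be sketched in miniature. Everything in this sketch is invented for illustration — the signature format, the distance threshold, and the "jam_wideband" placeholder are assumptions, not details of how Pulsar actually works:

```python
import math

# Library of known emitter signatures: (center MHz, bandwidth MHz, pulse ms).
# These example emitters and their parameters are hypothetical.
KNOWN_EMITTERS = {
    "radar_a": (9400.0, 2.0, 1.1),
    "drone_link": (2437.0, 20.0, 0.0),
}

def classify(signature, library, threshold=5.0):
    """Return the closest known emitter, or None if nothing is near enough."""
    best = min(library, key=lambda name: math.dist(signature, library[name]))
    return best if math.dist(signature, library[best]) <= threshold else None

def handle_intercept(signature, library, countermeasures):
    """Detect -> adapt -> share: an unknown signal gets a new library entry
    and a countermeasure record, which across a fleet would be replicated to
    every networked node (modelled here as shared dicts)."""
    label = classify(signature, library)
    if label is None:
        label = f"anomaly_{len(countermeasures)}"
        library[label] = signature               # learn the new threat
        countermeasures[label] = "jam_wideband"  # placeholder response
    return label
```

In this toy version, an intercept close to a known signature is classified as that emitter, while an unfamiliar one is labeled, added to the library, and given a countermeasure entry that every node sharing the dictionaries would then see — the networked-learning pattern the article attributes to Pulsar.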
Ukraine Is Riddled With Land Mines. Drones and AI Can Help
Early on a June morning in 2023, my colleagues and I drove down a bumpy dirt road north of Kyiv in Ukraine. The Ukrainian Armed Forces were conducting training exercises nearby, and mortar shells arced through the sky. We arrived at a vast field for a technology demonstration set up by the United Nations. Across the 25-hectare field—that’s about the size of 62 American football fields—the U.N. workers had scattered 50 to 100 inert mines and other ordnance. Our task was to fly our drone over the area and use our machine learning software to detect as many as possible. And we had to turn in our results within 72 hours.
The scale was daunting: The area was 10 times as large as anything we’d attempted before with our drone demining startup, Safe Pro AI. My cofounder Gabriel Steinberg and I used flight-planning software to program a drone to cover the whole area with some overlap, taking photographs the whole time. It ended up taking the drone 5 hours to complete its task, and it came away with more than 15,000 images. Then we raced back to the hotel with the data it had collected and began an all-night coding session.
We were happy to see that our custom machine learning model took only about 2 hours to crunch through all the visual data and identify potential mines and ordnance. But constructing a map for the full area that included the specific coordinates of all the detected mines in under 72 hours was simply not possible with any reasonable computational resources. The following day (which happened to coincide with the short-lived Wagner Group rebellion), we rewrote our algorithms so that our system mapped only the locations where suspected land mines were identified—a more scalable solution for our future work. — Read More
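The "map only the detections" idea is the key scalability insight: rather than stitching 15,000 photos into one full orthomosaic, each detection's pixel offset can be converted directly into ground coordinates using the photo's GPS fix and altitude. The sketch below illustrates that approach under invented camera parameters and simplifying nadir, north-up, flat-ground assumptions — Safe Pro AI's actual pipeline is not public:

```python
import math

# Assumed camera parameters (hypothetical 1-inch sensor; not Safe Pro AI's).
SENSOR_WIDTH_MM = 13.2
FOCAL_MM = 8.8
IMAGE_WIDTH_PX = 5472
IMAGE_HEIGHT_PX = 3648

def ground_sample_distance(alt_m):
    """Metres of ground covered by one pixel at a given flight altitude."""
    return (SENSOR_WIDTH_MM * alt_m) / (FOCAL_MM * IMAGE_WIDTH_PX)

def detection_to_latlon(px, py, lat, lon, alt_m):
    """Georeference one detection (pixel coords) from a nadir, north-up photo
    whose centre sits at the drone's (lat, lon) GPS fix."""
    gsd = ground_sample_distance(alt_m)
    east_m = (px - IMAGE_WIDTH_PX / 2) * gsd
    north_m = (IMAGE_HEIGHT_PX / 2 - py) * gsd
    # Small-offset approximation: metres -> degrees.
    dlat = north_m / 111_320.0
    dlon = east_m / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

The payoff is that the work scales with the handful of detections per image rather than with total pixels, which is why mapping only suspected mine locations fit inside the 72-hour window when building a full-area map could not.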
USAF Test Pilot School and DARPA announce breakthrough in aerospace machine learning
The U.S. Air Force Test Pilot School and the Defense Advanced Research Projects Agency were finalists for the 2023 Robert J. Collier Trophy, a formal acknowledgement of recent breakthroughs that have launched the machine-learning era within the aerospace industry.
The teams worked together to test breakthrough executions in artificial intelligence algorithms using the X-62A VISTA aircraft as part of DARPA’s Air Combat Evolution (ACE) program.
… In less than a calendar year the teams went from the initial installation of live AI agents into the X-62A’s systems to demonstrating the first AI-versus-human within-visual-range engagements, otherwise known as dogfights. In total, the team made over 100,000 lines of flight-critical software changes across 21 test flights. — Read More
M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence
Artificial intelligence (AI) is one of the most powerful technologies of our time, and the President has been clear that we must seize the opportunities AI presents while managing its risks. Consistent with the AI in Government Act of 2020, the Advancing American AI Act, and Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, this memorandum directs agencies to advance AI governance and innovation while managing risks from the use of AI in the Federal Government, particularly those affecting the rights and safety of the public. — Read More