Messier than Oil: Assessing Data Advantage in Military AI

“Data is the new oil,” or so we’ve been told. From policy pronouncements to media reports to op-eds, many have invoked this attractive analogy when discussing artificial intelligence. Kai-Fu Lee, author of AI Superpowers, has written, “in the age of AI, where data is the new oil, China is the new Saudi Arabia.”

Yet reality is far messier. With a population of 1.4 billion people, robust surveillance and data collection capabilities, and access to private sector data, the Chinese government appears to have vast quantities of data. But even if China has far more data than the United States, does this raw data necessarily translate into a meaningful advantage for China? And if so, is this enough to overtake the United States in AI? Both countries invest in AI for military applications; will China’s potentially greater access to commercial data accelerate its development of AI-enabled weapons relative to the United States?

This paper reviews the challenges in assessing whether the United States or China has a “data advantage” in the military AI realm—i.e., whether one country has access to more data in a way that confers an advantage in developing military AI systems. Read More

#china-vs-us, #dod

Could this software help users trust machine learning decisions?

New software developed by BAE Systems could help the Department of Defense build confidence in decisions and intelligence produced by machine learning algorithms, the company claims.

In a July 14 announcement, BAE Systems said it recently delivered its new MindfuL software program to the Defense Advanced Research Projects Agency. Developed in collaboration with the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, the software is designed to increase transparency in machine learning systems—artificial intelligence algorithms that learn and change over time as they are fed ever more data—by auditing them to provide insights into how they reach their decisions. Read More
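
BAE Systems has not published the technical details of how MindfuL performs these audits, so the sketch below is only a hedged illustration of what "auditing a model's decisions" can mean in practice, using a generic transparency technique (permutation feature importance from scikit-learn). The data, model, and library calls are assumptions for illustration, not a description of MindfuL.

```python
# Illustrative only: a generic model-auditing step using permutation feature
# importance. This is NOT the MindfuL software or its method; it shows one
# common way to ask "which inputs drove this model's decisions?"
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real audit would use held-out operational data.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Evidence of this kind, showing which inputs a model actually leans on, is the sort of insight an auditing tool could surface to a human decision-maker.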

#dod, #explainability

DARPA honors artificial intelligence expert

The irony of artificial intelligence is how much human brainpower is required to build it. For three years, our next guest had been on loan from the University of Massachusetts to the Defense Advanced Research Projects Agency. There she headed up several DARPA artificial intelligence projects. Now she’s been awarded a high honor, the Meritorious Public Service Medal. Dr. Hava Siegelmann joined Federal Drive with Tom Temin for more. Read More

#dod, #podcasts

The Promise And Risks Of Artificial Intelligence: A Brief History

Artificial intelligence (AI) has recently become a focus of efforts to maintain and enhance U.S. military, political, and economic competitiveness. The Defense Department’s 2018 strategy for AI, released not long after the creation of a new Joint Artificial Intelligence Center, proposes to accelerate the adoption of AI by fostering “a culture of experimentation and calculated risk taking,” an approach drawn from the broader National Defense Strategy. But what kinds of calculated risks might AI entail? The AI strategy has almost nothing to say about the risks incurred by the increased development and use of AI. On the contrary, the strategy proposes using AI to reduce risks, including those to “both deployed forces and civilians.” Read More

#artificial-intelligence, #dod

Artificial Intelligence at Core of Marine Officers’ ‘Big Ideas’ for Future of Force

A team of 10 Marines is mulling how to take major technology developments and apply them to combat missions, as part of a Naval Postgraduate School-hosted series of online TED Talk-style presentations.

Artificial intelligence (AI), machine learning, virtual reality and other technological advances are at the center of the “Big Ideas Exchange.” The goal is to move the most promising ideas from the theoretical to the practical as quickly as possible.

Several students said they drew inspiration for their thinking about the “future character of naval warfare” from Marine Corps Commandant Gen. David Berger’s 2019 guidance to the Marine Corps. Read More

#artificial-intelligence, #dod

Artificial Intelligence Outperforms Human Intel Analysts In a Key Area

A Defense Intelligence Agency experiment shows AI and humans have different risk tolerances when data is scarce.

In the 1983 movie WarGames, the world is brought to the edge of nuclear destruction when a military computer using artificial intelligence interprets false data as an imminent Soviet missile strike. Its human overseers in the Defense Department, unsure whether the data is real, can’t convince the AI that it may be wrong. A recent finding from the Defense Intelligence Agency, or DIA, suggests that in a real situation where humans and AI were looking at enemy activity, those positions would be reversed.

Artificial intelligence can actually be more cautious than humans about its conclusions in situations when data is limited. Read More
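
The DIA has not described the experiment in technical detail, but a toy sketch can show why a statistically grounded system naturally grows more cautious as evidence shrinks: for the same observed rate, the confidence interval widens as the sample gets smaller. The code below is a hypothetical illustration using a standard Wilson score interval, not a reconstruction of the DIA system.

```python
# Toy illustration (not the DIA experiment): a system that reports calibrated
# uncertainty becomes more cautious as its evidence shrinks. The width of a
# 95% confidence interval on an observed event rate grows as sample size falls.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)  # no data: maximum uncertainty
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (center - half, center + half)

for n in (1000, 100, 10):
    lo, hi = wilson_interval(int(0.7 * n), n)  # same 70% observed rate each time
    print(f"n={n:4d}: estimated rate 0.70, 95% interval [{lo:.2f}, {hi:.2f}]")
```

With 1,000 observations the interval around a 70 percent rate is roughly [0.67, 0.73]; with only 10 it balloons to roughly [0.40, 0.89], which is the statistical intuition behind a system hedging its conclusions when data is scarce.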

#dod, #ic

Joint Artificial Intelligence Center to Train “AI Champions”

The Joint Artificial Intelligence Center in the Department of Defense will train individuals to implement and champion the AI Ethical Principles that the department has adopted.

The center, known as JAIC, announced the creation of a cohort of “Responsible AI Champions” who will receive training on how to apply the department’s AI Ethical Principles in areas such as product design and development; testing and evaluation/verification and validation; and acquisition. Read More

#dod

Algorithmic Warfare: DoD Seeks AI Alliance to Counter China, Russia

Facing growing threats from Russia and China, the Defense Department wants to increase its collaboration with European allies as it pursues new artificial intelligence technology.

Lt. Gen. John N.T. “Jack” Shanahan, director of the Joint Artificial Intelligence Center, said global security challenges and technological innovations are changing the world rapidly. That reality means partner nations must work more closely together in areas such as artificial intelligence. Read More

#china, #dod, #russia

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

The leadership of the Department of Defense (DoD) tasked the Defense Innovation Board (DIB) with proposing Artificial Intelligence (AI) Ethics Principles for DoD for the design, development, and deployment of AI for both combat and non-combat purposes. Building upon the foundation of DoD’s existing ethical, legal, and policy frameworks, and responsive to the complexities of the rapidly evolving field of AI, the Board sought to develop principles consistent with the Department’s mission to deter war and ensure the country’s security. This document summarizes the DIB’s project and includes a brief background; an outline of enduring DoD ethics principles that transcend AI; a set of proposed AI Ethics Principles; and a set of recommendations to facilitate the Department’s adoption of these principles and advance the wider aim of promoting AI safety, security, and robustness. The DIB’s complete report includes detailed explanations and addresses the wider historical, policy, and theoretical context for these recommendations. It is available at innovation.defense.gov/ai.

The DIB is an independent federal advisory committee that provides advice and recommendations to DoD senior leaders; it does not speak for DoD. This report is an earnest attempt to provide an opening for a thought-provoking dialogue, both within the Department and externally in our wider society. The Department has the sole responsibility to determine how best to proceed with the recommendations made in this report. Read More

#dod, #ethics

Spies Like AI: The Future of Artificial Intelligence for the US Intelligence Community

America’s intelligence collectors are already using AI in ways big and small, to scan the news for dangerous developments, send alerts to ships about rapidly changing conditions, and speed up the NSA’s regulatory compliance efforts. But before the IC can use AI to its full potential, it must be hardened against attack. The humans who use it — analysts, policy-makers and leaders — must better understand how advanced AI systems reach their conclusions.

Dean Souleles is working to put AI into practice at different points across the U.S. intelligence community, in line with the ODNI’s year-old strategy. The chief technology advisor to the principal deputy to the Director of National Intelligence wasn’t allowed to discuss everything that he’s doing, but he could talk about a few examples.  Read More

#dod, #ic