Joint Artificial Intelligence Center to Train “AI Champions”

The Joint Artificial Intelligence Center in the Department of Defense will train individuals to implement and champion the AI ethics principles that the department has adopted.

The center, known as JAIC, announced the creation of a cohort of “Responsible AI Champions” who will receive training on how to apply the department’s AI Ethical Principles in areas such as product design and development; testing and evaluation/verification and validation; and acquisition. Read More

#dod

Algorithmic Warfare: DoD Seeks AI Alliance to Counter China, Russia

Facing growing threats from Russia and China, the Defense Department wants to increase its collaboration with European allies as it pursues new artificial intelligence technology.

Lt. Gen. John N.T. “Jack” Shanahan, director of the Joint Artificial Intelligence Center, said global security challenges and technological innovations are changing the world rapidly. That reality means partner nations must work more closely together in areas such as artificial intelligence. Read More

#china, #dod, #russia

AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense

The leadership of the Department of Defense (DoD) tasked the Defense Innovation Board (DIB) with proposing Artificial Intelligence (AI) Ethics Principles for DoD for the design, development, and deployment of AI for both combat and non-combat purposes. Building upon the foundation of DoD’s existing ethical, legal, and policy frameworks and responsive
to the complexities of the rapidly evolving field of AI, the Board sought to develop principles consistent with the Department’s mission to deter war and ensure the country’s security. This document summarizes the DIB’s project and includes a brief background; an outline of enduring DoD ethics principles that transcend AI; a set of proposed AI Ethics Principles; and a set of recommendations to facilitate the Department’s adoption of these principles and advance the wider aim of promoting AI safety, security, and robustness. The DIB’s complete report includes detailed explanations and addresses the wider historical, policy, and theoretical context for these recommendations. It is available at innovation.defense.gov/ai.

The DIB is an independent federal advisory committee that provides advice and recommendations to DoD senior leaders; it does not speak for DoD. This report is an earnest attempt to open a thought-provoking dialogue, both internally within the Department and externally in wider society. The Department has sole responsibility to determine how best to proceed with the recommendations made in this report. Read More

#dod, #ethics

Spies Like AI: The Future of Artificial Intelligence for the US Intelligence Community

America’s intelligence collectors are already using AI in ways big and small, to scan the news for dangerous developments, send alerts to ships about rapidly changing conditions, and speed up the NSA’s regulatory compliance efforts. But before the IC can use AI to its full potential, it must be hardened against attack. The humans who use it — analysts, policy-makers and leaders — must better understand how advanced AI systems reach their conclusions.

Dean Souleles is working to put AI into practice at different points across the U.S. intelligence community, in line with the ODNI's year-old strategy. As chief technology advisor to the principal deputy to the Director of National Intelligence, he wasn't allowed to discuss everything he's doing, but he could talk about a few examples. Read More

#dod, #ic

The US just released 10 principles that it hopes will make AI safer

The principles (with my translation) are:

  1. Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
  2. Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.
  3. Scientific integrity and information quality. Policy decisions should be based on science. 
  4. Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
  5. Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
  6. Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
  7. Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
  8. Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
  9. Safety and security. Agencies should keep all data used by AI systems safe and secure.
  10. Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.

Read More

#dod, #ic

The Pentagon’s AI Chief Prepares for Battle

Nearly every day, in war zones around the world, American military forces request fire support. By radioing coordinates to a howitzer miles away, infantrymen can deliver the awful ruin of a 155-mm artillery shell on opposing forces. If defense officials in Washington have their way, artificial intelligence is about to make that process a whole lot faster.

The effort to speed up fire support is one of a handful of initiatives that Lt. Gen. Jack Shanahan describes as the “lower consequence missions” that the Pentagon is using to demonstrate how it can integrate artificial intelligence into its weapons systems. As the head of the Joint Artificial Intelligence Center, a 140-person clearinghouse within the Department of Defense focused on speeding up AI adoption, Shanahan and his team are building applications in well-established AI domains—tools for predictive maintenance and health record analysis—but also venturing into the more exotic, pursuing AI capabilities that would make the technology a centerpiece of American warfighting. Read More

#dod

A DARPA Perspective on Artificial Intelligence

DARPA AI Next

DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools.

DARPA sees three waves of AI:

— Handcrafted Knowledge
— Statistical Learning
— Contextual Adaptation

Read More

#dod

Artificial Intelligence and National Security — Updated November 21, 2019

Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. The U.S. Department of Defense (DOD), along with other nations' militaries, is developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles. Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology's development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military. A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes. Read More

#dod, #ic

Preparing the Military for a Role on an Artificial Intelligence Battlefield

The Defense Innovation Board—an advisory committee of tech executives, scholars, and technologists—has unveiled its list of ethical principles for artificial intelligence (AI). If adopted by the Defense Department, then the recommendations will help shape the Pentagon’s use of AI in both combat and non-combat systems. The board’s principles are an important milestone that should be celebrated, but the real challenge of adoption and implementation is just beginning. For the principles to have an impact, the department will need strong leadership from the Joint AI Center (JAIC), buy-in from senior military leadership and outside groups, and additional technical expertise within the Defense Department.  Read More

#dod, #ethics, #ic

The US Army is creating robots that can follow orders

For robots to be useful teammates, they need to be able to understand what they’re told to do—and execute it with minimal supervision.

Military robots have always been pretty dumb. The PackBot the US Army uses for inspections and bomb disposal, for example, has practically no onboard intelligence and is piloted by remote control. What the Army has long wanted instead are intelligent robot teammates that can follow orders without constant supervision. Read More

#dod, #robotics