The US just released 10 principles that it hopes will make AI safer

The principles (with my translation) are:

  1. Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
  2. Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.
  3. Scientific integrity and information quality. Policy decisions should be based on science. 
  4. Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
  5. Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
  6. Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
  7. Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
  8. Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
  9. Safety and security. Agencies should keep all data used by AI systems safe and secure.
  10. Interagency coordination. Agencies should talk to one another to be consistent and predictable in AI-related policies.

Read More

#dod, #ic

The Pentagon’s AI Chief Prepares for Battle

Nearly every day, in war zones around the world, American military forces request fire support. By radioing coordinates to a howitzer miles away, infantrymen can deliver the awful ruin of a 155-mm artillery shell on opposing forces. If defense officials in Washington have their way, artificial intelligence is about to make that process a whole lot faster.

The effort to speed up fire support is one of a handful of initiatives that Lt. Gen. Jack Shanahan describes as the “lower consequence missions” the Pentagon is using to demonstrate how it can integrate artificial intelligence into its weapons systems. As the head of the Joint Artificial Intelligence Center, a 140-person clearinghouse within the Department of Defense focused on speeding up AI adoption, Shanahan and his team are building applications in well-established AI domains—tools for predictive maintenance and health record analysis—but also venturing into the more exotic, pursuing AI capabilities that would make the technology a centerpiece of American warfighting. Read More

#dod

A DARPA Perspective on Artificial Intelligence

DARPA AI Next

DARPA envisions a future in which machines are more than just tools that execute human-programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools.

DARPA sees three waves of AI:

— Handcrafted Knowledge
— Statistical Learning
— Contextual Adaptation

Read More

#dod

Artificial Intelligence and National Security — Updated November 21, 2019

Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security. As such, the U.S. Department of Defense (DOD) and the militaries of other nations are developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles. Already, AI has been incorporated into military operations in Iraq and Syria. Congressional action has the potential to shape the technology’s development further, with budgetary and legislative decisions influencing the growth of military applications as well as the pace of their adoption.

AI technologies present unique challenges for military integration, particularly because the bulk of AI development is happening in the commercial sector. Although AI is not unique in this regard, the defense acquisition process may need to be adapted for acquiring emerging technologies like AI. In addition, many commercial AI applications must undergo significant modification prior to being functional for the military. A number of cultural issues also challenge AI acquisition, as some commercial AI companies are averse to partnering with DOD due to ethical concerns, and even within the department, there can be resistance to incorporating AI technology into existing weapons systems and processes. Read More

#dod, #ic

Preparing the Military for a Role on an Artificial Intelligence Battlefield

The Defense Innovation Board—an advisory committee of tech executives, scholars, and technologists—has unveiled its list of ethical principles for artificial intelligence (AI). If adopted by the Defense Department, then the recommendations will help shape the Pentagon’s use of AI in both combat and non-combat systems. The board’s principles are an important milestone that should be celebrated, but the real challenge of adoption and implementation is just beginning. For the principles to have an impact, the department will need strong leadership from the Joint AI Center (JAIC), buy-in from senior military leadership and outside groups, and additional technical expertise within the Defense Department.  Read More

#dod, #ethics, #ic

The US Army is creating robots that can follow orders

For robots to be useful teammates, they need to be able to understand what they’re told to do—and execute it with minimal supervision.

Military robots have always been pretty dumb. The PackBot the US Army uses for inspections and bomb disposal, for example, has practically no onboard intelligence and is piloted by remote control. What the Army has long wanted instead are intelligent robot teammates that can follow orders without constant supervision.   Read More

#dod, #robotics

Future Military Intelligence CONOPS and S&T Investment Roadmap 2035-2050: The Cognitive War

There are four major findings about operations critical to the effectiveness and success of future intelligence operations in 2035-2050 and beyond. The findings apply broadly not only to military intelligence but also to the greater Intelligence Community (IC), the DoD, and by extension to several other elements of our federal national security framework. Realizing the Future Intelligence CONOPS 2035-2050 projection, however, depends on addressing these findings. If they are not addressed, it is highly probable that intelligence operations will continue to mimic the reactive posture of today.

The four findings:

— The IC and DoD, created in 1947, continue to function in a primarily reactive posture, using the industrial age processes of the era in which they were created.
— Information and the democratization of technology have changed the character of warfare.
— The 2018 National Security and Defense Strategies address the new character of warfare.
— Immediate investments are required to enable the success and effectiveness of future intelligence operations in 2035-2050 and beyond. Read More

#dod, #ic

National Security Commission on Artificial Intelligence (NSCAI): Initial Report

The National Security Commission on Artificial Intelligence — which is tasked with researching ways to advance the development of AI for national security and defense purposes — released its initial report to Congress on July 31.

The panel has 15 members, led by Chairman Eric Schmidt, the former head of Google’s parent company Alphabet, and Vice Chairman Robert O. Work, a former deputy secretary of defense who served in the Obama administration. Read More

#china-vs-us, #dod, #ic

National Security Commission on Artificial Intelligence (NSCAI): Interim Report

In the report, the government-commissioned panel notes many times that China is investing more in AI and is taking advantage of the U.S. to “transfer AI know-how.” The report also says that AI infrastructure within the Department of Defense “is severely underdeveloped.” 

The Commission raised concerns about the progress China has made. The report also said the U.S. government still faces enormous work before it can transition AI from “a promising technological novelty into a mature technology integrated into core national security missions.” Read More

#china-vs-us, #dod, #ic

‘Tectonic shift’ of Space Command has intelligence community feeling aftershocks

Redefining space as a warfighting domain made waves throughout the defense community as officials began thinking about defending assets in space. Maj. Gen. John Shaw, deputy commander for Air Force Space Command, called the creation of Space Command a “tectonic shift.” Now the aftershocks of that shift are being felt in the intelligence community as analysts have to reconsider space’s role in intelligence gathering.

“When you think of space and intelligence together, you might be like me: I spent my career thinking about intelligence collection in space coming down to the Earth, intelligence from space,” Shaw said on Agency in Focus: Intelligence Community. “We need to think really, really hard now about intelligence for space. Where is that intelligence expertise that processes the capabilities? We have to understand what’s actually happening in the space environment.” Read More

#dod, #ic, #podcasts