AI Risk Management Framework: Initial Draft

This initial draft of the Artificial Intelligence Risk Management Framework (AI RMF, or Framework) builds on the concept paper released in December 2021 and incorporates the feedback received. The AI RMF is intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems.

AI research and deployment are evolving rapidly, and the AI RMF and its companion documents will evolve with them. By the time AI RMF 1.0 is issued in January 2023, NIST, working with stakeholders, intends to have built out the remaining sections to reflect new knowledge, awareness, and practices.

Part I of the AI RMF sets the stage for why the AI RMF is important and explains its intended use and audience. Part II includes the AI RMF Core and Profiles. Part III includes a companion Practice Guide to assist in adopting the AI RMF.

That Practice Guide, which will be released for comment, includes additional examples and practices that can assist in using the AI RMF. The Guide will be part of a NIST AI Resource Center that is being established. Read More

#adversarial, #nist

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems

Can trust, one of the primary bases of relationships throughout history, be quantified and measured?

[Illustration: people evaluating two different tasks performed by AI, music selection and medical diagnosis, may trust the AI to different degrees because each task carries a different level of risk.]

Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations?

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. Read More

#nist, #trust

NIST Lays Out Roadmap for Developing Artificial Intelligence Standards

Federal standards for artificial intelligence must be strict enough to prevent the technology from harming humans, yet flexible enough to encourage innovation and get the tech industry on board, according to the National Institute of Standards and Technology.

However, without better standards for measuring the performance and trustworthiness of AI tools, officials said, the government could have a tough time striking that balance.

On Monday, NIST released its much-anticipated guidance on how the government should approach developing technical and ethical standards for artificial intelligence. Read More

#nist