AI Risk Management Framework: Initial Draft

This initial draft of the Artificial Intelligence Risk Management Framework (AI RMF, or Framework) builds on the concept paper released in December 2021 and incorporates the feedback received. The AI RMF is intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems.

AI research and deployment is evolving rapidly. For that reason, the AI RMF and its companion documents will evolve over time. When AI RMF 1.0 is issued in January 2023, NIST, working with stakeholders, intends to have built out the remaining sections to reflect new knowledge, awareness, and practices.

Part I of the AI RMF sets the stage for why the AI RMF is important and explains its intended use and audience. Part II includes the AI RMF Core and Profiles. Part III includes a companion Practice Guide to assist in adopting the AI RMF.

That Practice Guide, which will be released for comment, includes additional examples and practices that can assist in using the AI RMF. The Guide will be part of a NIST AI Resource Center that is being established. Read More

#adversarial, #nist

Google: Machine Or AI Generated Content Still Not High Quality

For the past several years, Google has been saying that when machine-generated or AI-generated content becomes high quality, it might be something Google allows within its search webmaster guidelines. Well, in 2022, that day is still not here – yet.

In the past few days, Google’s John Mueller made some comments on machine- or AI-generated content, essentially knocking the quality of such content.

On Reddit this morning, he replied “nope” when asked, “Are AI content writers good for creating blog posts or product review posts?” And on Twitter yesterday, when asked about using AI-based content creation tools to generate content, he said, “as far as I can tell, most sites have trouble creating higher-quality content, they don’t need help creating low-quality content.”  Read More

#nlp