Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities

As a nation, we have yet to grasp the full benefits or unwanted effects of artificial intelligence. AI is widely used, but how do we know it’s working appropriately?

This report identifies key accountability practices—centered around the principles of governance, data, performance, and monitoring—to help federal agencies and others use AI responsibly. For example, the governance principle calls for users to set clear goals and engage with diverse stakeholders.

To develop these practices, we held a forum on AI oversight with experts from government, industry, and nonprofits. We also interviewed federal inspector general officials and AI experts. Read More

#dod, #ic, #trust

NIST Proposes Method for Evaluating User Trust in Artificial Intelligence Systems

Can trust, one of the primary bases of relationships throughout history, be quantified and measured?

Illustration: people evaluating two different tasks performed by AI, music selection and medical diagnosis, may trust the AI to differing degrees because the risk level of each task differs.

Every time you speak to a virtual assistant on your smartphone, you are talking to an artificial intelligence — an AI that can, for example, learn your taste in music and make song recommendations that improve based on your interactions. However, AI also assists us with more risk-fraught activities, such as helping doctors diagnose cancer. These are two very different scenarios, but the same issue permeates both: How do we humans decide whether or not to trust a machine’s recommendations?

This is the question that a new draft publication from the National Institute of Standards and Technology (NIST) poses, with the goal of stimulating a discussion about how humans trust AI systems. The document, Artificial Intelligence and User Trust (NISTIR 8332), is open for public comment until July 30, 2021. Read More

#nist, #trust

Podcast: What’s AI doing in your wallet?

Tech giants are moving into our wallets—bringing AI and big questions with them.

Our entire financial system is built on trust. We can exchange otherwise worthless paper bills for fresh groceries, or swipe a piece of plastic for new clothes. But this trust—typically in a central-government-backed bank—is changing. As our financial lives are rapidly digitized, the resulting data turns into fodder for AI. Companies like Apple, Facebook, and Google see it as an opportunity to disrupt the entire experience of how people think about and engage with their money. But will we as consumers really get more control over our finances? In this first of a series on automation and our wallets, we explore a digital revolution in how we pay for things. Read More

#podcasts, #trust

Promoting the Use of Trustworthy Artificial Intelligence in Government

Artificial intelligence promises to drive the growth of the United States economy and improve the quality of life of all Americans.

On December 3, 2020, President Donald J. Trump signed the Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which establishes guidance for Federal agency adoption of Artificial Intelligence (AI) to more effectively deliver services to the American people and foster public trust in this critical technology. Read More

#dod, #ic, #trust

Ethical AI isn’t the same as trustworthy AI, and that matters

Artificial intelligence (AI) solutions are facing increased scrutiny due to their aptitude for amplifying both good and bad decisions, and more specifically for their propensity to expose and heighten existing societal biases and inequalities. It is only right, then, that discussions of ethics are taking center stage as AI adoption increases.

In lockstep with ethics comes the topic of trust. Ethics are the guiding rules for the decisions we make and actions we take. These rules of conduct reflect our core beliefs about what is right and fair. Trust, on the other hand, reflects our belief that another person — or company — is reliable, has integrity and will behave in the manner we expect. Ethics and trust are discrete, but often mutually reinforcing, concepts.

So is an ethical AI solution inherently trustworthy? Read More

#ethics, #trust

Deep Evidential Regression

Deterministic neural networks (NNs) are increasingly being deployed in safety critical domains, where calibrated, robust, and efficient measures of uncertainty are crucial. In this paper, we propose a novel method for training non-Bayesian NNs to estimate a continuous target as well as its associated evidence in order to learn both aleatoric and epistemic uncertainty. We accomplish this by placing evidential priors over the original Gaussian likelihood function and training the NN to infer the hyperparameters of the evidential distribution. We additionally impose priors during training such that the model is regularized when its predicted evidence is not aligned with the correct output. Our method does not rely on sampling during inference or on out-of-distribution (OOD) examples for training, thus enabling efficient and scalable uncertainty learning. We demonstrate learning well-calibrated measures of uncertainty on various benchmarks, scaling to complex computer vision tasks, as well as robustness to adversarial and OOD test samples. Read More
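
For readers who want to see the mechanics, the following is a minimal PyTorch sketch of the loss the abstract describes: the network's head emits the four parameters of a Normal-Inverse-Gamma prior over the Gaussian likelihood, and training minimizes the negative log-likelihood of the induced Student-t marginal plus a penalty that shrinks the predicted evidence when the prediction is wrong. The function names and the regularization weight are illustrative assumptions, not the paper's released code.

import math

import torch
import torch.nn.functional as F

def split_evidential_params(raw: torch.Tensor):
    """Map raw network outputs of shape (..., 4) to valid NIG parameters."""
    gamma, log_nu, log_alpha, log_beta = raw.chunk(4, dim=-1)
    nu = F.softplus(log_nu)               # nu > 0
    alpha = F.softplus(log_alpha) + 1.0   # alpha > 1
    beta = F.softplus(log_beta)           # beta > 0
    return gamma, nu, alpha, beta

def evidential_loss(y, gamma, nu, alpha, beta, lambda_reg=0.01):
    # Negative log-likelihood of the Student-t marginal induced by the NIG prior.
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * torch.log(math.pi / nu)
           - alpha * torch.log(omega)
           + (alpha + 0.5) * torch.log((y - gamma) ** 2 * nu + omega)
           + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
    # Evidence regularizer: penalizes large total evidence (2*nu + alpha) on errors,
    # which discourages confident predictions that turn out to be wrong.
    reg = torch.abs(y - gamma) * (2.0 * nu + alpha)
    return (nll + lambda_reg * reg).mean()

Once trained this way, aleatoric uncertainty can be read off as beta / (alpha - 1) and epistemic uncertainty as beta / (nu * (alpha - 1)), with no sampling required at inference time.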

#trust

New study shows trust levels in artificial intelligence predicted, boosted by people’s relationship style

A University of Kansas interdisciplinary team led by relationship psychologist Omri Gillath has published a new paper in the journal Computers in Human Behavior showing people’s trust in artificial intelligence (AI) is tied to their relationship or attachment style.

The research indicates for the first time that people who are anxious about their relationships with humans tend to have less trust in AI as well. Importantly, the research also suggests trust in artificial intelligence can be increased by reminding people of their secure relationships with other humans. Read More

#trust

When governments turn to AI: Algorithms, trade-offs, and trust

Artificial intelligence can help government agencies solve complex public-sector problems. For those that are new at it, here are five factors that can affect the benefits and risks.

As artificial intelligence (AI) and machine learning gain momentum, an increasing number of government agencies are considering or starting to use them to improve decision making. Additionally, COVID-19 has suddenly put an emphasis on speed. In these uncharted waters, where the tides continue to shift, it’s not surprising that analytics, widely recognized for its problem-solving and predictive prowess, has become an essential navigational tool. Compelling applications include identifying tax-evasion patterns, sorting through infrastructure data to target bridge inspections, sifting through health and social-service data to prioritize cases for child welfare and support, and predicting the spread of infectious diseases. Such applications enable governments to operate more efficiently, improving outcomes while keeping costs down. Read More

#trust, #explainability

There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks

Artificial Intelligence (AI) plays a fundamental role in the modern world, especially when used as an autonomous decision maker. One common concern nowadays is “how trustworthy the AIs are.” Human operators follow a strict educational curriculum and performance assessment that could be used to quantify how much trust we place in them. To quantify the trust of AI decision makers, we must go beyond task accuracy, especially when facing limited, incomplete, misleading, controversial, or noisy datasets. Toward addressing these challenges, we describe DeepTrust, a Subjective Logic (SL)-inspired framework that constructs a probabilistic logic description of an AI algorithm and takes into account the trustworthiness of both the dataset and the inner algorithmic workings. DeepTrust identifies proper multi-layered neural network (NN) topologies that have high projected trust probabilities, even when trained with untrusted data. We show that an uncertain opinion of the data is not always detrimental when evaluating an NN’s opinion and trustworthiness, whereas a disbelief opinion hurts trust the most. Also, trust probability does not necessarily correlate with accuracy. DeepTrust also provides a projected trust probability of an NN’s prediction, which is useful when the NN generates an over-confident output on problematic datasets. These findings open new analytical avenues for designing and improving the NN topology by optimizing opinion and trustworthiness, along with accuracy, in a multi-objective optimization formulation, subject to space and time constraints. Read More
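
The Subjective Logic machinery the abstract refers to can be illustrated with a short sketch. The Opinion class, the numbers, and the idea of fusing a "data" opinion with a "model" opinion below are illustrative assumptions; only the projected-probability and cumulative-fusion formulas are standard Subjective Logic, and DeepTrust's own rules for propagating opinions through NN layers are given in the paper.

from dataclasses import dataclass

@dataclass
class Opinion:
    """A binomial Subjective Logic opinion: belief + disbelief + uncertainty = 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5  # prior probability assumed in the absence of evidence

    def projected_probability(self) -> float:
        # Projected trust probability: belief plus the base-rate share of uncertainty.
        return self.belief + self.base_rate * self.uncertainty

def cumulative_fusion(a: Opinion, b: Opinion) -> Opinion:
    """Standard SL cumulative fusion of two independent opinions
    (equal base rates and non-zero uncertainty assumed for simplicity)."""
    k = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
    return Opinion(
        belief=(a.belief * b.uncertainty + b.belief * a.uncertainty) / k,
        disbelief=(a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / k,
        uncertainty=(a.uncertainty * b.uncertainty) / k,
        base_rate=a.base_rate,
    )

# Hypothetical example: fuse an opinion about the training data with one about the model.
data_opinion = Opinion(belief=0.6, disbelief=0.1, uncertainty=0.3)
model_opinion = Opinion(belief=0.7, disbelief=0.2, uncertainty=0.1)
fused = cumulative_fusion(data_opinion, model_opinion)
print(f"Projected trust probability: {fused.projected_probability():.3f}")

A highly uncertain opinion mostly dilutes the result, while a large disbelief value drags the projected trust probability down directly, which mirrors the abstract's observation that disbelief hurts trust the most.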

#explainability, #trust

Introducing the Model Card Toolkit for Easier Model Transparency Reporting

Machine learning (ML) model transparency is important across a wide variety of domains that impact people’s lives, from healthcare to personal finance to employment. The information needed by downstream users will vary, as will the details that developers need in order to decide whether or not a model is appropriate for their use case. This desire for transparency led us to develop a new tool for model transparency, Model Cards, which provide a structured framework for reporting on ML model provenance, usage, and ethics-informed evaluation. Model Cards give a detailed overview of a model’s suggested uses and limitations that can benefit developers, regulators, and downstream users alike.

Over the past year, we’ve launched Model Cards publicly and worked to create Model Cards for open-source models released by teams across Google. Read More
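
A minimal sketch of the toolkit's Python workflow is below, following the package's documented usage around the time of this announcement; the output directory, model name, and overview text are placeholders, and method names such as update_model_card_json may differ in later releases of model-card-toolkit.

import model_card_toolkit as mctlib

# Initialize the toolkit; generated assets (JSON data and HTML) land in this directory.
mct = mctlib.ModelCardToolkit("model_card_assets")

# Scaffold an empty model card and fill in provenance and usage details.
model_card = mct.scaffold_assets()
model_card.model_details.name = "example-text-classifier"  # hypothetical model name
model_card.model_details.overview = (
    "An illustrative classifier used only to demonstrate the Model Card workflow."
)

# Write the populated card back to the assets directory and render it as HTML.
mct.update_model_card_json(model_card)
html = mct.export_format()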

#big7, #devops, #trust