Could this software help users trust machine learning decisions?

New software developed by BAE Systems could help the Department of Defense build confidence in decisions and intelligence produced by machine learning algorithms, the company claims.

BAE Systems said in a July 14 announcement that it recently delivered its new MindfuL software program to the Defense Advanced Research Projects Agency. Developed in collaboration with the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, the software is designed to increase transparency in machine learning systems (artificial intelligence algorithms that learn and change over time as they are fed ever more data) by auditing them to provide insight into how they reach their decisions. Read More

#dod, #explainability

Democratizing artificial intelligence is a double-edged sword

When company leaders talk about democratizing artificial intelligence (AI), it’s easy to imagine what they have in mind. The more people who have access to the raw materials (the knowledge, tools, and data) required to build an AI system, the more innovations are bound to emerge. Efficiency improves and engagement increases. Faced with a shortage of technical talent? Microsoft, Amazon, and Google have all released premade drag-and-drop or no-code AI tools that allow people to integrate AI into applications without needing to know how to build machine learning (ML) models. Read More

#bias, #explainability

The two-year fight to stop Amazon from selling face recognition to the police

In the summer of 2018, nearly 70 civil rights and research organizations wrote a letter to Jeff Bezos demanding that Amazon stop providing face recognition technology to governments. As part of an increased focus on the role that tech companies were playing in enabling the US government’s tracking and deportation of immigrants, it called on Amazon to “stand up for civil rights and civil liberties.” “As advertised,” it said, “Rekognition is a powerful surveillance system readily available to violate rights and target communities of color.”

Along with the letter, the American Civil Liberties Union (ACLU) of Washington delivered over 150,000 petition signatures as well as another letter from the company’s own shareholders expressing similar demands. A few days later, Amazon’s employees echoed the concerns in an internal memo.

Despite the mounting pressure, Amazon continued with business as usual. Read More

#bias, #explainability, #image-recognition

XAI—Explainable artificial intelligence

Explainability is essential for users to effectively understand, trust, and manage powerful artificial intelligence applications.

Recent successes in machine learning (ML) have led to a new wave of artificial intelligence (AI) applications that offer extensive benefits to a diverse range of fields. However, many of these systems are not able to explain their autonomous decisions and actions to human users. Explanations may not be essential for certain AI applications, and some AI researchers argue that the emphasis on explanation is misplaced, too difficult to achieve, and perhaps unnecessary. However, for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners [see recent reviews (1–3)].

Recent AI successes are largely attributed to new ML techniques that construct models in their internal representations. These include support vector machines (SVMs), random forests, probabilistic graphical models, reinforcement learning (RL), and deep learning (DL) neural networks. Although these models exhibit high performance, they are opaque in terms of explainability. There may be inherent conflict between ML performance (e.g., predictive accuracy) and explainability. Often, the highest performing methods (e.g., DL) are the least explainable, and the most explainable (e.g., decision trees) are the least accurate. Figure 1 illustrates this with a notional graph of the performance-explainability tradeoff for some of the ML techniques. Read More
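
The tradeoff is easy to demonstrate in code. The sketch below is a minimal illustration, assuming scikit-learn and its bundled breast-cancer dataset (neither of which the article references): a shallow decision tree whose learned rules can be printed verbatim is trained alongside a random forest that usually scores somewhat higher but offers no comparably compact account of its reasoning.

    # Minimal sketch of the performance-vs-explainability tradeoff described above.
    # The dataset and model choices are illustrative, not taken from the article.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Explainable model: a shallow tree whose full decision logic fits on a screen.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

    # Typically higher-performing but opaque model: an ensemble of 200 trees.
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    print("decision tree accuracy: %.3f" % tree.score(X_test, y_test))
    print("random forest accuracy: %.3f" % forest.score(X_test, y_test))

    # The tree can be rendered as human-readable if/else rules; the forest cannot.
    print(export_text(tree, feature_names=list(X.columns)))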

#explainability

AI Explainability (Google Whitepaper)

Systems built around AI will affect and, in many cases, redefine medical interventions, autonomous transportation, criminal justice, financial risk management and many other areas of society. However, considering the challenges involved, the usefulness and fairness of these AI systems will be gated by our ability to understand, explain and control them. Read More

#explainability

Increasing transparency with Google Cloud Explainable AI

June marked the first anniversary of Google’s AI Principles, which formally outline our pledge to explore the potential of AI in a respectful, ethical and socially beneficial way. For Google Cloud, they also serve as an ongoing commitment to our customers—the tens of thousands of businesses worldwide who rely on Google Cloud AI every day—to deliver the transformative capabilities they need to thrive while aiming to help improve privacy, security, fairness, and the trust of their users.

We strive to build AI aligned with our AI Principles and we’re excited to introduce Explainable AI, which helps humans understand how a machine learning model reaches its conclusions. Read More

#explainability

We’re Making Progress in Explainable AI, but Major Pitfalls Remain

Machine learning algorithms are starting to exceed human performance in many narrow and specific domains, such as image recognition and certain types of medical diagnoses. They’re also rapidly improving in more complex domains such as generating eerily human-like text. We increasingly rely on machine learning algorithms to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets the job.

But machine learning algorithms cannot explain the decisions they make. How can we justify putting these systems in charge of decisions that affect people’s lives if we don’t understand how they’re arriving at those decisions? Read More

#explainability

New Theory Cracks Open the Black Box of Deep Learning

Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well. No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either).

… Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Read More
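
The theory Tishby presented is his information bottleneck account of deep learning, which treats training as learning a compressed internal representation T of the input X that keeps only what is relevant to the output Y. In standard notation (the article itself does not quote the formula), the objective is

    \min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y),

where I(·;·) denotes mutual information and the Lagrange multiplier β trades compression of X against preservation of information about Y.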

#explainability

Deep Learning Explainability: Hints from Physics

Nowadays, artificial intelligence is present in almost every part of our lives. Smartphones, social media feeds, recommendation engines, online ad networks, and navigation tools are some examples of AI-based applications that already affect us every day. Deep learning in areas such as speech recognition, autonomous driving, machine translation, and visual object recognition has been systematically improving the state of the art for a while now.

However, the reasons that make deep neural networks (DNN) so powerful are only heuristically understood, i.e. we know only from experience that we can achieve excellent results by using large datasets and following specific training protocols. Recently, one possible explanation was proposed, based on a remarkable analogy between a physics-based conceptual framework called renormalization group (RG) and a type of neural network known as a restricted Boltzmann machine (RBM). Read More
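
For reference, the RBM half of that analogy is a two-layer energy-based model over visible units v and hidden units h; its standard textbook definition (not quoted in the article) is

    E(\mathbf{v}, \mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i W_{ij} h_j,
    \qquad p(\mathbf{v}, \mathbf{h}) = \frac{e^{-E(\mathbf{v}, \mathbf{h})}}{Z},

where a and b are bias vectors, W couples visible to hidden units, and Z is the partition function. The proposed mapping relates stacking such layers to the successive coarse-graining steps of the renormalization group.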

#explainability

Trust, control and personalization through human-centric AI

Our virtual lives lie in the hands of algorithms that govern what we see and don’t see, how we perceive the world, and which life choices we make. Artificial intelligence decides which movies are of interest to you, what your social media feeds should look like, and which advertisements have the highest likelihood of convincing you. These algorithms are controlled either by corporations or by governments, each of which tends to have goals that differ from the individual’s objectives.

In this article, we dive into the world of human-centric AI, leading to a new era where the individual not only controls the data, but also steers the algorithms to ensure fairness, privacy and trust. Breaking free from filter bubbles and detrimental echo chambers that skew the individual’s worldview allows the user to truly benefit from today’s AI revolution.

While the devil is in the implementation and many open questions remain, the main purpose of this think piece is to spark a discussion and lay out a vision of how AI can be employed in a human-centric way. Read More

#explainability, #trust