Deeper Neural Networks Lead to Simpler Embeddings

Recent research has increasingly investigated how neural networks, as heavily over-parametrized as they are, manage to generalize. According to traditional statistics, the more parameters a model has, the more it overfits. That notion is directly contradicted by a fundamental axiom of deep learning: increased parametrization improves generalization.

Although it may not be explicitly stated anywhere, it’s the intuition behind why researchers keep pushing models to larger sizes to make them more powerful.

There have been many efforts to explain exactly why this is so, and most are quite interesting. The recently proposed Lottery Ticket Hypothesis frames neural networks as giant lotteries in which training finds a winning subnetwork, and another paper argues, with a theoretical proof, that this phenomenon is built into the nature of deep learning.

Perhaps one of the most intriguing, though, is one proposing that deeper neural networks lead to simpler embeddings. This is also known as the “simplicity bias”: neural network parameters are biased toward simpler mappings. Read More

#explainability

InShort: Occlusion Analysis for Explaining DNNs

There are many methods for explaining deep neural networks (DNNs), each with its advantages and disadvantages. In most cases, we are interested in local explanation methods, i.e. explanations of the network’s output for a particular input, because DNNs tend to be too complex to be explained globally (independently of any particular input).

… In this short article, I will present one fundamental attribution technique: occlusion analysis. The basic concept is as simple as they come: For every dimension of an input x, we evaluate the model with that dimension missing and observe how the output changes. Read More
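
The idea fits in a few lines. Below is a minimal sketch, assuming the model is simply a callable that maps a 1-D feature vector to a scalar score (e.g. the logit of the class of interest) and that a “missing” dimension is simulated by substituting a baseline value of zero; the function name and the zero baseline are illustrative choices, not prescribed by the article.

```python
# Minimal occlusion-analysis sketch for a generic scalar-output model.
import numpy as np

def occlusion_attribution(model, x, baseline=0.0):
    """Attribute model(x) to each input dimension by occluding it."""
    reference = model(x)                          # output on the unmodified input
    scores = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        occluded = x.copy()
        occluded[i] = baseline                    # "remove" dimension i
        scores[i] = reference - model(occluded)   # drop in output = relevance of i
    return scores

# Toy usage: a linear "model" whose weights the attributions should recover.
w = np.array([2.0, -1.0, 0.0, 3.0])
model = lambda x: float(w @ x)
print(occlusion_attribution(model, np.array([1.0, 1.0, 1.0, 1.0])))
# -> [ 2. -1.  0.  3.]  (for a linear model, each score is w_i * (x_i - baseline))
```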

#explainability

How explainable artificial intelligence can help humans innovate

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level.

However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.

Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. Read More

#explainability

When governments turn to AI: Algorithms, trade-offs, and trust

Artificial intelligence can help government agencies solve complex public-sector problems. For those that are new to it, here are five factors that can affect the benefits and risks.

As artificial intelligence (AI) and machine learning gain momentum, an increasing number of government agencies are considering or starting to use them to improve decision making. Additionally, COVID-19 has suddenly put an emphasis on speed. In these uncharted waters, where the tides continue to shift, it’s not surprising that analytics, widely recognized for its problem-solving and predictive prowess, has become an essential navigational tool. Compelling applications include those that identify tax-evasion patterns, sort through infrastructure data to target bridge inspections, sift through health and social-service data to prioritize cases for child welfare and support, or predict the spread of infectious diseases. They enable governments to perform more efficiently, both improving outcomes and keeping costs down. Read More

#trust, #explainability

There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks

Artificial Intelligence (AI) plays a fundamental role in the modern world, especially when used as an autonomous decision maker. One common concern nowadays is “how trustworthy the AIs are.” Human operators follow a strict educational curriculum and performance assessment that could be exploited to quantify how much trust we place in them. To quantify the trust of AI decision makers, we must go beyond task accuracy, especially when facing limited, incomplete, misleading, controversial or noisy datasets.

Toward addressing these challenges, we describe DeepTrust, a Subjective Logic (SL) inspired framework that constructs a probabilistic logic description of an AI algorithm and takes into account the trustworthiness of both the dataset and the inner algorithmic workings. DeepTrust identifies proper multi-layered neural network (NN) topologies that have high projected trust probabilities, even when trained with untrusted data. We show that an uncertain opinion of the data is not always malicious when evaluating the NN’s opinion and trustworthiness, whereas a disbelief opinion hurts trust the most. Also, trust probability does not necessarily correlate with accuracy. DeepTrust also provides a projected trust probability of the NN’s prediction, which is useful when the NN generates an over-confident output under problematic datasets. These findings open new analytical avenues for designing and improving the NN topology by optimizing opinion and trustworthiness, along with accuracy, in a multi-objective optimization formulation, subject to space and time constraints. Read More
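
To make the abstract’s vocabulary concrete: in Subjective Logic, a binomial opinion is a tuple of belief, disbelief, uncertainty and a base rate (with belief + disbelief + uncertainty = 1), and its projected probability is b + a·u. The sketch below shows only that standard SL primitive, not the DeepTrust framework itself; the class and variable names are illustrative.

```python
# A minimal sketch of a Subjective Logic binomial opinion and its projected
# probability P = b + a * u (the SL building block DeepTrust reasons with).
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float        # evidence that the source/output is trustworthy
    disbelief: float     # evidence that it is not
    uncertainty: float   # lack of evidence; belief + disbelief + uncertainty == 1
    base_rate: float = 0.5  # prior probability used where evidence is missing

    def projected_probability(self) -> float:
        """Projected (expected) trust probability P = b + a * u."""
        return self.belief + self.base_rate * self.uncertainty

# An uncertain-but-not-disbelieving opinion still projects moderate trust,
# while a strongly disbelieving one does not (cf. the abstract's finding that
# disbelief hurts trust the most).
uncertain = Opinion(belief=0.2, disbelief=0.0, uncertainty=0.8)
disbelieving = Opinion(belief=0.2, disbelief=0.8, uncertainty=0.0)
print(uncertain.projected_probability())     # 0.6
print(disbelieving.projected_probability())  # 0.2
```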

#explainability, #trust

Google researchers investigate how transfer learning works

Transfer learning’s ability to store knowledge gained while solving one problem and apply it to a related problem has attracted considerable attention. But despite recent breakthroughs, no one fully understands what enables a successful transfer and which parts of the algorithms are responsible for it.

That’s why Google researchers sought to develop analysis techniques tailored to explainability challenges in transfer learning. In a new paper, they say their contributions help clear up a few of the mysteries around why machine learning models transfer successfully — or fail to. Read More
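
For context, the setting being analyzed is ordinary transfer learning: reuse a network trained on a source task and adapt part of it to a target task. The sketch below is a generic fine-tuning recipe, not the Google paper’s analysis technique; the ResNet-18 backbone, the frozen-feature strategy, and the 10-class target head are all assumptions for illustration.

```python
# Minimal transfer-learning sketch in PyTorch (torchvision >= 0.13 weights API).
import torch
import torch.nn as nn
from torchvision import models

num_target_classes = 10  # assumed size of the downstream (target) task

backbone = models.resnet18(weights="IMAGENET1K_V1")  # knowledge from the source task
for param in backbone.parameters():
    param.requires_grad = False                      # freeze the transferred features

# Replace the source-task classifier with a fresh head for the target task.
backbone.fc = nn.Linear(backbone.fc.in_features, num_target_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
# ...train on target-task data as usual; only the new head's weights update.
```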

#transfer-learning, #explainability

China’s AI tech leaves aside questions of ethics

Artificial intelligence, like other forms of technology, reflects the culture and values of the people who create it and those who provide the data frameworks upon which it is built. AI technology developed in different countries or organizations may thus offer different answers to the same problem. Read More

#china-ai, #explainability

NIST Asks AI to Explain Itself

It’s a question that many of us encounter in childhood: “Why did you do that?” As artificial intelligence (AI) begins making more consequential decisions that affect our lives, we also want these machines to be capable of answering that simple yet profound question. After all, why else would we trust AI’s decisions?

This desire for satisfactory explanations has spurred scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how explainable AI’s decisions are. Their draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), is intended to stimulate a conversation about what we should expect of our decision-making devices.  Read More

#explainability

Democratization of AI

When company leaders talk about democratizing artificial intelligence (AI), it’s not difficult to imagine what they have in mind. The more people with access to the raw materials of knowledge, tools, and data required to build an AI system, the more innovations that are bound to emerge. Efficiency improves and engagement increases. Faced with a shortage of technical talent? Microsoft, Amazon, and Google have all released premade, drag-and-drop or no-code AI tools that allow people to integrate AI into applications without needing to know how to build machine learning models.

But as companies move toward democratization, a cautionary tale is emerging. Even the most sophisticated AI systems, designed by highly qualified engineers, can fall victim to bias, explainability issues, and other flaws. Read More

#bias, #explainability

Explainable AI: A guide for making black box machine learning models explainable

In the future, AI will explain itself, and interpretability could boost machine intelligence research. Getting a grip on the basics is a good way to get there, and Christoph Molnar’s book is a good place to start.

Christoph Molnar is a data scientist and PhD candidate in interpretable machine learning. Molnar has written the book “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable”, in which he elaborates on the issue and examines methods for achieving explainability. Read More

#explainability