We’re Making Progress in Explainable AI, but Major Pitfalls Remain

Machine learning algorithms are starting to exceed human performance in narrow domains such as image recognition and certain types of medical diagnosis. They’re also rapidly improving in more open-ended tasks, such as generating eerily human-like text. We increasingly rely on these algorithms to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets the job.

But most machine learning algorithms cannot explain the decisions they make. How can we justify putting these systems in charge of decisions that affect people’s lives if we don’t understand how they arrive at those decisions?

#explainability