Is Interpreting ML Models a Dead-End?

The interpretation process can be detached from the model architecture

Models are nowadays our primary tool for understanding the phenomena around us, from the movement of the stars to the opinions and behavior of social groups. With the explosion of machine learning (ML) theory and technology, we have been equipped with the most powerful tool in the history of science to understand a phenomenon and predict its outcome under given conditions. By now, we are able to detect fraud, design transportation plans, and make progress on self-driving cars.

Despite machine learning's potential to model a phenomenon, the complexity of its models has stood in the way of its democratization. While many models have the unquestionable ability to give us the predictions we are looking for, their use in many industries is still limited for reasons such as a lack of computational power or limited software availability. Another reason, rarely discussed as a limiting factor, is the claimed impossibility of interpreting highly complex black-box or deep learning (DL) models. Under this claim, many practitioners find themselves accepting lower prediction accuracy in exchange for higher model interpretability. Read More
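
As one illustration of how the interpretation process can be detached from the model architecture, a model-agnostic technique such as permutation feature importance treats the trained model purely as a black box. The sketch below is not from the article itself; the dataset and estimator are illustrative assumptions, and the same procedure would apply unchanged to any other fitted model.

```python
# Minimal sketch of model-agnostic interpretation via permutation feature
# importance. The interpretation step never looks inside the model, so it
# works for any architecture. Dataset and model choice are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any estimator with .fit/.predict would do here.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features.
top = sorted(zip(X.columns, result.importances_mean),
             key=lambda p: p[1], reverse=True)[:5]
for name, mean_imp in top:
    print(f"{name}: {mean_imp:.4f}")
```

Because the procedure only queries the model's predictions, swapping in a deep neural network or any other black-box estimator would leave the interpretation code untouched.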

#model-interpretability

The Mythos of Model Interpretability

Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. Read More

#model-interpretability