Often, when machine learning models become complex, even data scientists struggle to explain how the model produces its outputs. This results in compromised data science projects that suffer from:
• A lack of trust from stakeholders,
• Difficulty complying with government regulations — particularly in financial services — that require risk and trading models to be documented and explained,
• Reluctance to implement major business process changes based on insights from models that stakeholders don’t understand.
In this white paper, Eszter Windhager-Pokol, head of data science at Starschema, explains the concept of interpretability and why it is important, and presents several methods for addressing the problems posed by black-box models.