
Opening the Black Box

As machine learning models become more complex, even data scientists can struggle to explain how a model produces its outputs. This results in compromised data science projects that suffer from:

• A lack of trust from stakeholders,
• Difficulty complying with government regulations — particularly in financial services — that require risk and trading models to be documented and explained,
• Reluctance to implement major business process changes based on insights from models that the stakeholders don’t understand.

In this white paper, Eszter Windhager-Pokol, Starschema's head of data science, explains the concept of interpretability, why it matters, and several methods for addressing the problems posed by black-box models.

