
Interpretable Machine Learning with Python

We have briefly touched on this topic before: high performance often requires complexity, and complexity inhibits interpretability. As studied in Chapter 2, Key Concepts of Interpretability, this complexity comes primarily from three sources: non-linearity, non-monotonicity, and interactivity. Any complexity the model adds is compounded by the number and nature of the features in your dataset, which is itself a source of complexity.
Conversely, the opposite properties, namely linearity, monotonicity, and an absence of interaction effects, can help make a model more interpretable.
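To make these sources of complexity concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset of our own choosing (this is an illustration, not an example from the book). The target mixes a linear term with a non-linear, non-monotonic interaction term, and we fit both an intrinsically interpretable linear model and a more complex ensemble to it:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.uniform(-3, 3, size=(500, 2))
# Synthetic target (an assumption for illustration): a linear term plus
# a non-linear, non-monotonic interaction between the two features,
# exactly the kinds of complexity described above.
y = 2 * X[:, 0] + np.sin(X[:, 0] * X[:, 1]) + rng.normal(0, 0.1, 500)

linear = LinearRegression().fit(X, y)
forest = RandomForestRegressor(n_estimators=100, random_state=42).fit(X, y)

# The linear model exposes its entire logic in two coefficients and an
# intercept, but it cannot capture the interaction term...
print("linear coefficients:", linear.coef_, "intercept:", linear.intercept_)
print("linear R^2:", linear.score(X, y))
# ...while the forest fits the interaction far better, yet offers no
# comparably compact, consistent summary of how it derives predictions.
print("forest R^2:", forest.score(X, y))
```

On data like this, the forest will typically score markedly higher than the linear model, which is the trade-off in miniature: the extra predictive power comes from exactly the non-linearity and interactivity that make the model harder to interpret.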
In Chapter 1, Interpretation, Interpretability, and Explainability; and Why Does It All Matter?, we discussed why being able to look under the hood of a model and intuitively understand how all of its moving parts derive its predictions in a consistent manner is largely what separates explainability from interpretability. This property is also...