
Interpretable Machine Learning with Python

In a nutshell, traditional interpretation methods only cover surface-level questions about your models, such as the following:
These questions are very limiting if you are trying to understand not only whether your model works but also why and how it works.
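To ground what "surface-level" means here, the short sketch below (the dataset and classifier choices are illustrative assumptions, not prescriptions from this book) trains a model and evaluates it the traditional way, with a single aggregate metric. It can tell you that the model works, but nothing about why or how it arrives at its predictions:

```python
# A minimal sketch of "surface-level" model evaluation.
# Dataset and classifier are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Answers "does it perform well?" but not "why or how does it work?"
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```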
This gap in understanding can lead to unexpected issues with your model that won't necessarily be immediately apparent. Let's consider that models, once deployed, are not static but dynamic. They face different challenges than they did in the "lab" where you trained them. They may face not only performance issues but also issues with bias, such as the underrepresentation of certain classes, or with security, such as adversarial attacks. Realizing that...
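To make the class-imbalance concern concrete, here is a minimal sketch, assuming simulated label arrays (they are not data from this book), that compares the class distribution seen during training with the one observed after deployment; a pronounced shift toward one class suggests the other has become underrepresented:

```python
# Illustrative sketch: spotting class imbalance in post-deployment data.
# The label arrays below are simulated assumptions for demonstration.
import numpy as np

train_labels = np.array([0] * 500 + [1] * 480)  # roughly balanced in the "lab"
live_labels = np.array([0] * 900 + [1] * 60)    # skewed once deployed

def class_proportions(labels):
    values, counts = np.unique(labels, return_counts=True)
    return dict(zip(values.tolist(), (counts / counts.sum()).round(3).tolist()))

print("Training distribution:", class_proportions(train_labels))
print("Live distribution:    ", class_proportions(live_labels))
# A large shift toward one class can mean the minority class is now
# underrepresented, which can quietly bias the model's behavior.
```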