A taxonomy is a system for classifying things; the benefit of building one is that it helps us understand and organize information in a useful way. Because of the vast research interest in ML explainability, you will encounter different taxonomies of ML interpretability methods, along with a variety of terms. Let's explain some of the fundamental terms before moving forward.
So far, we have established that an ML explainability method is a way of understanding how an ML model works; such methods help us understand the behavior of complex ML models that are otherwise difficult to inspect directly. Building on this mental model, we can divide model interpretability into four distinct types.
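To make the idea of an interpretability method concrete, here is a minimal sketch of one commonly used technique, permutation feature importance, applied to a complex model. The choice of method, model, dataset, and the scikit-learn library are illustrative assumptions, not prescribed by the text.

```python
# Illustrative sketch only: permutation feature importance as one example of
# a model interpretability method (model, dataset, and library are assumptions).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a "complex" model whose behavior we want to understand.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle each feature and measure the drop in score.
# A large drop suggests the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features by mean importance.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, mean_importance in ranked[:5]:
    print(f"{name}: {mean_importance:.3f}")
```

The output is a ranking of features by how much the model's accuracy degrades when each one is randomly shuffled, which gives a rough, model-agnostic view of what the trained model depends on.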