
Interpretable Machine Learning with Python

The mission of this chapter was twofold.
Regarding the first goal, we produced four models with bias mitigation methods that are objectively fairer than the base model according to four fairness metrics (SPD, DI, AOD, and EOD) when comparing the privileged and underprivileged age groups. However, only two of these models are also intersectionally fairer when both age group and gender are considered, according to DFBA (see Figure 11.7). We could still improve fairness significantly by combining methods, yet any one of the four models improves on the base model.
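As a reminder of what these four group fairness metrics measure, here is a minimal sketch, not the chapter's code, of how they could be computed directly from binary labels and predictions. It assumes hypothetical arrays y_true and y_pred of 0/1 values and a boolean mask priv for the privileged age group, and it uses the standard definitions in which differences are taken as unprivileged minus privileged; in practice, a library such as AIF360 computes these for you.

    import numpy as np

    def group_fairness_metrics(y_true, y_pred, priv):
        # Convert inputs to arrays; `priv` marks members of the privileged group.
        y_true, y_pred, priv = map(np.asarray, (y_true, y_pred, priv))
        unpriv = ~priv

        def favorable_rate(mask):
            # Proportion of the group receiving the favorable prediction (1).
            return y_pred[mask].mean()

        def tpr(mask):
            # True positive rate within the group.
            return y_pred[mask & (y_true == 1)].mean()

        def fpr(mask):
            # False positive rate within the group.
            return y_pred[mask & (y_true == 0)].mean()

        spd = favorable_rate(unpriv) - favorable_rate(priv)   # statistical parity difference
        di = favorable_rate(unpriv) / favorable_rate(priv)    # disparate impact (ratio)
        aod = ((fpr(unpriv) - fpr(priv)) +
               (tpr(unpriv) - tpr(priv))) / 2                 # average odds difference
        eod = tpr(unpriv) - tpr(priv)                         # equal opportunity difference
        return {"SPD": spd, "DI": di, "AOD": aod, "EOD": eod}

Under these conventions, values of SPD, AOD, and EOD close to zero and a DI close to one indicate parity between the two groups, which is how the four mitigated models were judged fairer than the base model.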
As for the second goal, the causal inference framework determined that any of the policies tested is better than no policy for both parties. Hooray! However, it yielded estimates that didn't establish...