Debugging Machine Learning Models with Python

By: Ali Madani
Overview of this book

Debugging Machine Learning Models with Python is a comprehensive guide that navigates you through the entire spectrum of mastering machine learning, from foundational concepts to advanced techniques. It goes beyond the basics to arm you with the expertise essential for building reliable, high-performance models for industrial applications. Whether you're a data scientist, analyst, machine learning engineer, or Python developer, this book will empower you to design modular systems for data preparation, accurately train and test models, and seamlessly integrate them into larger technologies. By bridging the gap between theory and practice, you'll learn how to evaluate model performance, identify and address issues, and harness recent advancements in deep learning and generative modeling using PyTorch and scikit-learn. Your journey to developing high-quality models in practice will also encompass causal and human-in-the-loop modeling and machine learning explainability. With hands-on examples and clear explanations, you'll develop the skills to deliver impactful solutions across domains such as healthcare, finance, and e-commerce.
Table of Contents (26 chapters)

  • Part 1: Debugging for Machine Learning Modeling
  • Part 2: Improving Machine Learning Models
  • Part 3: Low-Bug Machine Learning Development and Deployment
  • Part 4: Deep Learning Modeling
  • Part 5: Advanced Topics in Model Debugging

Reviewing why having explainability is not enough

Explainability helps us build trust with the users of our models. As you learned in this chapter, you can use explainability techniques to understand how your models generate their outputs for one or more instances in a dataset. These explanations can help us improve our models from both a performance and a fairness perspective. However, we cannot achieve such improvements by simply applying these techniques blindly and generating some results in Python. For example, as we discussed in the Counterfactual generation using Diverse Counterfactual Explanations (DiCE) section, some of the generated counterfactuals might not be reasonable or meaningful, and we cannot rely on them. Similarly, when generating local explanations for one or more data points using SHAP or LIME, we need to pay attention to the meaning of each feature, the range of values it can take and what those values represent, and the characteristics of each data point we investigate...
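To make the SHAP part of this point concrete, here is a minimal sketch (not from the book; it assumes a scikit-learn random forest trained on the toy diabetes dataset) that pairs each feature's raw value with its SHAP contribution for a single instance, so the explanation can be sanity-checked against the meaning and realistic range of every feature before it is trusted:

```python
# Minimal sketch: local SHAP explanation for one instance, shown next to the
# raw feature values so each contribution can be checked against the feature's
# meaning and plausible range. The dataset and model here are illustrative.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
instance = X.iloc[[0]]                          # one data point to explain
shap_values = explainer.shap_values(instance)[0]

# Pair each feature's value with its SHAP contribution.
for name, value, contribution in zip(X.columns, instance.iloc[0], shap_values):
    print(f"{name:>6}: value={value:+.3f}  shap={contribution:+.3f}")
```

If a feature with an implausible or out-of-range value dominates such an explanation, that is a signal to revisit the data point or the preprocessing rather than to act on the explanation as-is.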
