Machine Learning with LightGBM and Python

By: Andrich van Wyk
Overview of this book

Machine Learning with LightGBM and Python is a comprehensive guide to learning the basics of machine learning and progressing to building scalable machine learning systems that are ready for release. This book will get you acquainted with the high-performance gradient-boosting LightGBM framework and show you how it can be used to solve various machine learning problems to produce highly accurate, robust, and predictive solutions. Starting with simple machine learning models in scikit-learn, you’ll explore the intricacies of gradient-boosting machines and LightGBM. You’ll be guided through various case studies to better understand the data science process and learn how to practically apply your skills to real-world problems. As you progress, you’ll elevate your software engineering skills by learning how to build and integrate scalable machine learning pipelines to process data, train models, and deploy them to serve secure APIs using Python tools such as FastAPI. By the end of this book, you’ll be well equipped to use state-of-the-art tools to build production-ready systems, including FLAML for AutoML, PostgresML for operating ML pipelines with Postgres, Dask for high-performance distributed training and serving, and AWS SageMaker for creating and running models in the cloud.
Table of Contents (17 chapters)

Part 1: Gradient Boosting and LightGBM Fundamentals
Part 2: Practical Machine Learning with LightGBM
Part 3: Production-ready Machine Learning with LightGBM

Building a LightGBM ML pipeline with Amazon SageMaker

The dataset we’ll use for our case study of building a SageMaker pipeline is the Census Income dataset from Chapter 4, Comparing LightGBM, XGBoost, and Deep Learning. It is also available as a SageMaker sample dataset, which makes it easy to work with if you are just getting started on SageMaker.
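Before diving into the pipeline, it can help to inspect the data locally. Here is a minimal sketch that loads the Census Income (UCI Adult) dataset directly from the UCI repository with pandas; the column names follow the standard Adult schema, and the SageMaker sample copy of the dataset has the same layout:

```python
import pandas as pd

# Standard column names for the UCI Adult (Census Income) dataset
columns = [
    "age", "workclass", "fnlwgt", "education", "education-num",
    "marital-status", "occupation", "relationship", "race", "sex",
    "capital-gain", "capital-loss", "hours-per-week", "native-country",
    "income",
]
df = pd.read_csv(
    "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
    names=columns,
    skipinitialspace=True,  # the raw file has a space after each comma
)
print(df.shape)                      # (32561, 15)
print(df["income"].value_counts())  # binary target: <=50K vs. >50K
```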

The pipeline we’ll build consists of the following steps (a sketch of how they are assembled appears after the list):

  1. Data preprocessing.
  2. Model training and tuning.
  3. Model evaluation.
  4. Bias and explainability checks using SageMaker Clarify.
  5. Model registration within SageMaker.
  6. Model deployment using an AWS Lambda function.
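To make these steps concrete, here is a minimal sketch of how the first two are wired together with the SageMaker Python SDK and combined into a Pipeline object. The script name (preprocess.py), the pipeline name, and the training image URI are placeholder assumptions; the actual code for each step is developed in the sections that follow:

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

session = sagemaker.session.Session()
role = sagemaker.get_execution_role()  # IAM role of the Studio notebook

# Step 1: data preprocessing as a scikit-learn processing job
processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)
preprocess_step = ProcessingStep(
    name="PreprocessCensusIncome",
    processor=processor,
    outputs=[
        ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
        ProcessingOutput(output_name="validation", source="/opt/ml/processing/validation"),
    ],
    code="preprocess.py",  # hypothetical preprocessing script
)

# Step 2: model training; the image URI is a placeholder for the
# LightGBM training container retrieved for your region
estimator = Estimator(
    image_uri="<lightgbm-training-image-uri>",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    sagemaker_session=session,
)
train_step = TrainingStep(
    name="TrainLightGBM",
    estimator=estimator,
    inputs={
        "train": TrainingInput(
            preprocess_step.properties.ProcessingOutputConfig
            .Outputs["train"].S3Output.S3Uri
        ),
        "validation": TrainingInput(
            preprocess_step.properties.ProcessingOutputConfig
            .Outputs["validation"].S3Output.S3Uri
        ),
    },
)

# The evaluation, Clarify, registration, and Lambda deployment steps
# are appended to this list as they are defined later in the chapter
pipeline = Pipeline(
    name="census-income-lightgbm",
    steps=[preprocess_step, train_step],
    sagemaker_session=session,
)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
execution = pipeline.start()    # launch a pipeline run
```

Note how each step's outputs are referenced through its properties attribute rather than hardcoded S3 paths; SageMaker resolves these references at execution time, which is what lets the steps form a dependency graph.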

Here’s a graph showing the complete pipeline:

Figure 9.2 – SageMaker ML pipeline for Census Income classification

Our approach is to create the entire pipeline using a Jupyter notebook running in SageMaker Studio. The sections that follow walk through the code for each pipeline step, starting...