Hands-On Ensemble Learning with Python

By: Kyriakides, Margaritis

Overview of this book

Ensembling is a technique for combining two or more similar or dissimilar machine learning algorithms to create a model that delivers superior predictive power. This book will demonstrate how you can use a variety of weak algorithms to make a strong predictive model. With its hands-on approach, you'll get up to speed not only with the basic theory but also with the application of different ensemble learning techniques. Using examples and real-world datasets, you'll be able to produce better machine learning models to solve supervised learning problems such as classification and regression. In addition, you'll leverage ensemble learning techniques such as ensemble clustering to produce unsupervised machine learning models. As you progress, the chapters will cover different machine learning algorithms that are widely used in practice to make predictions and classifications. You'll also get to grips with Python libraries such as scikit-learn and Keras for implementing different ensemble models. By the end of this book, you will be well-versed in ensemble learning, able to judge which ensemble method is required for which problem, and able to implement it successfully in real-world scenarios.
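
To make the idea of combining dissimilar learners concrete, here is a minimal sketch using scikit-learn's VotingClassifier; the dataset, base estimators, and hyperparameters are illustrative choices, not the book's own examples.

```python
# Minimal sketch: combine three dissimilar learners into one ensemble
# (illustrative dataset and estimators, not the book's own code).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three dissimilar base learners, combined by majority (hard) voting.
ensemble = VotingClassifier(estimators=[
    ('lr', LogisticRegression(max_iter=5000)),
    ('dt', DecisionTreeClassifier(random_state=0)),
    ('knn', KNeighborsClassifier()),
], voting='hard')

ensemble.fit(X_train, y_train)
print('Ensemble test accuracy:', ensemble.score(X_test, y_test))
```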
Table of Contents (20 chapters)
  • Section 1: Introduction and Required Software Tools
  • Section 2: Non-Generative Methods
  • Section 3: Generative Methods
  • Section 4: Clustering
  • Section 5: Real World Applications

Summary

In this chapter, we presented the concepts of bias and variance, as well as the trade-off between them. They are essential to understanding how and why a model may underperform, either in-sample or out-of-sample. We then introduced the concept of and motivation for ensemble learning, how to identify bias and variance in models, and the basic categories of ensemble learning methods. We presented ways to measure and plot bias and variance using scikit-learn and matplotlib. Finally, we talked about the difficulties and drawbacks of implementing ensemble learning methods. Some key points to remember are the following.

High-bias models usually have difficulty performing well in-sample. This is also called underfitting. It is due to the model's simplicity (or lack of complexity). High-variance models usually have difficulty generalizing or performing well out-of-sample...
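
As a rough sketch of how this behaviour can be measured and plotted with scikit-learn and matplotlib (an assumed setup, not the book's exact code), the validation-curve plot below contrasts in-sample and out-of-sample accuracy as model complexity grows:

```python
# Rough sketch (assumed dataset, estimator, and swept hyperparameter;
# not the book's exact code) of plotting bias/variance behaviour.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import validation_curve
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
depths = np.arange(1, 11)

# Cross-validated train and validation scores as model complexity grows.
train_scores, val_scores = validation_curve(
    DecisionTreeClassifier(random_state=0), X, y,
    param_name='max_depth', param_range=depths, cv=5)

plt.plot(depths, train_scores.mean(axis=1), label='train (in-sample)')
plt.plot(depths, val_scores.mean(axis=1), label='validation (out-of-sample)')
plt.xlabel('max_depth (model complexity)')
plt.ylabel('accuracy')
plt.legend()
plt.show()
```

In such a plot, low accuracy on both curves suggests high bias (underfitting), while a large gap between the training and validation curves suggests high variance (overfitting).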
