Mastering Predictive Analytics with scikit-learn and TensorFlow

By Alvaro Fuentes

Overview of this book

Python provides a wide range of features for data science. Mastering Predictive Analytics with scikit-learn and TensorFlow covers various implementations of ensemble methods, shows how they are used with real-world datasets, and explains how they improve prediction accuracy in classification and regression problems. The book starts with ensemble methods and their features, and shows how scikit-learn provides tools for choosing model hyperparameters. As you make your way through the book, you will cover the nitty-gritty of predictive analytics and explore its features and characteristics. You will also be introduced to artificial neural networks and TensorFlow, and how TensorFlow is used to create neural networks. The final chapter explores factors such as computational power, along with improvement methods and software enhancements, for efficient predictive analytics. By the end of this book, you will be well-versed in using deep neural networks to solve common problems in big data analysis.

Summary

In this chapter, we learned about cross-validation and two of its variants: holdout cross-validation and k-fold cross-validation. We saw that k-fold cross-validation is essentially holdout cross-validation repeated k times, once per fold. We implemented k-fold cross-validation on the diamond dataset, used it to compare different models, and found that the random forest model performed best.
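
A minimal sketch of this k-fold cross-validation workflow with scikit-learn is shown below. The diamond dataset and the exact models from the chapter are not reproduced here; a synthetic regression problem generated with make_regression and two illustrative models (linear regression and a random forest) stand in for them so the snippet runs on its own.

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Stand-in data: the book uses the diamond dataset; a synthetic regression
# problem is used here so the example is self-contained.
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=42)

models = {
    "linear_regression": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=42),
}

kfold = KFold(n_splits=10, shuffle=True, random_state=42)

for name, model in models.items():
    # Each model is fit on k-1 folds and scored on the held-out fold, k times.
    scores = cross_val_score(model, X, y, cv=kfold,
                             scoring="neg_mean_squared_error")
    print(f"{name}: mean CV MSE = {-scores.mean():.2f}")

Averaging the per-fold scores gives a more stable estimate of each model's performance than a single train/test split, which is what makes this comparison meaningful.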

Then, we discussed hyperparameter tuning and the exhaustive grid-search method used to perform it. We applied hyperparameter tuning to the diamond dataset as well, compared the tuned and untuned models, and found that the tuned model performs better.
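
The following sketch illustrates exhaustive grid search with scikit-learn's GridSearchCV, again on stand-in data rather than the chapter's diamond dataset; the parameter grid shown is illustrative, not the one used in the book.

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in data again; the book performs this step on the diamond dataset.
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

# Exhaustive grid search: every combination in param_grid is evaluated with
# k-fold cross-validation on the training set.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 8, 16],
}
grid = GridSearchCV(RandomForestRegressor(random_state=42), param_grid,
                    cv=5, scoring="neg_mean_squared_error", n_jobs=-1)
grid.fit(X_train, y_train)

print("Best parameters:", grid.best_params_)

# Compare tuned vs. untuned models on the held-out test set.
untuned = RandomForestRegressor(random_state=42).fit(X_train, y_train)
print("Untuned test MSE:", mean_squared_error(y_test, untuned.predict(X_test)))
print("Tuned test MSE:  ", mean_squared_error(y_test, grid.predict(X_test)))

By default, GridSearchCV refits the best parameter combination on the full training set, so grid.predict already uses the tuned model.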

In the next chapter, we will study feature selection methods, dimensionality reduction...
