Mastering Predictive Analytics with scikit-learn and TensorFlow

By: Alvaro Fuentes

Overview of this book

Python is a programming language that provides a wide range of features that can be used in the field of data science. Mastering Predictive Analytics with scikit-learn and TensorFlow covers various implementations of ensemble methods, how they are used with real-world datasets, and how they improve prediction accuracy in classification and regression problems. This book starts with ensemble methods and their features. You will see that scikit-learn provides tools for choosing hyperparameters for models. As you make your way through the book, you will cover the nitty-gritty of predictive analytics and explore its features and characteristics. You will also be introduced to artificial neural networks and TensorFlow, and how TensorFlow is used to create neural networks. In the final chapter, you will explore factors such as computational power, along with improvement methods and software enhancements for efficient predictive analytics. By the end of this book, you will be well-versed in using deep neural networks to solve common problems in big data analysis.

Cross-validation and Parameter Tuning

Predictive analytics is about making predictions for unknown events. We use it to produce models that generalize well to unseen data. To assess how well a model generalizes, we use a technique called cross-validation.

Cross-validation is a validation technique for assessing how well the result of a statistical analysis generalizes to an independent dataset; it gives a measure of out-of-sample accuracy. It achieves this by averaging over several random partitions of the data into training and test samples. It is often used for hyperparameter tuning: cross-validation is run for several candidate values of a parameter, and the value that gives the lowest average cross-validation error is chosen.
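As a minimal sketch of this idea, assuming scikit-learn with a synthetic dataset and a Ridge regressor as placeholders (not the book's data or model), each candidate value of a hyperparameter is cross-validated and the value with the lowest average error is kept:

```python
# Sketch: tuning one hyperparameter by minimizing average cross-validation error.
# The dataset and estimator here are illustrative assumptions, not the book's.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]   # candidate hyperparameter values
cv_errors = []
for alpha in alphas:
    model = Ridge(alpha=alpha)
    # 5-fold cross-validation; scores are negative MSE, so negate to get errors
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    cv_errors.append(-scores.mean())

best_alpha = alphas[int(np.argmin(cv_errors))]
print("Average CV error per alpha:", dict(zip(alphas, np.round(cv_errors, 2))))
print("Chosen alpha (lowest CV error):", best_alpha)
```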

There are two kinds of cross-validation: exhaustive and non-exhaustive. K-fold is an example of non-exhaustive cross-validation. It is a technique for getting a more accurate assessment...
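Below is a minimal sketch of k-fold cross-validation, assuming scikit-learn with an illustrative classifier and synthetic data (not the book's): the data is partitioned into k folds, the model is trained on k-1 folds and evaluated on the held-out fold, and the per-fold scores are averaged.

```python
# Sketch: k-fold cross-validation with an explicit KFold split.
# The classifier and dataset are illustrative assumptions, not the book's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_accuracies = []
for train_idx, test_idx in kf.split(X):
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X[train_idx], y[train_idx])                           # train on k-1 folds
    fold_accuracies.append(model.score(X[test_idx], y[test_idx]))   # evaluate on held-out fold

print("Per-fold accuracy:", np.round(fold_accuracies, 3))
print("Mean accuracy:", round(float(np.mean(fold_accuracies)), 3))
```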
