Essential Statistics for Non-STEM Data Analysts

By: Li
4.6 (10)
Overview of this book

Statistics remains the backbone of modern data analysis, helping you interpret the results produced by data science pipelines. This book is a detailed guide to the math and statistical methods required for data science tasks. It starts by showing you how to preprocess data and inspect distributions and correlations from a statistical perspective. You’ll then get to grips with the fundamentals of statistical analysis and apply its concepts to real-world datasets. As you advance, you’ll find out how statistical concepts emerge at different stages of a data science pipeline, learn to summarize datasets in the language of statistics, and use those summaries as a foundation for robust data products such as explanatory and predictive models. Once you’ve uncovered the working mechanisms of data science algorithms, you’ll cover essential concepts for efficient data collection, cleaning, mining, visualization, and analysis. Finally, you’ll apply statistical methods to key machine learning techniques such as classification, regression, tree-based methods, and ensemble learning. By the end of this book, you’ll have learned how to build and present a self-contained, statistics-backed data product that meets your business goals.
Table of Contents (19 chapters)
  • Section 1: Getting Started with Statistics for Data Science
  • Section 2: Essentials of Statistical Analysis
  • Section 3: Statistics for Machine Learning
  • Section 4: Appendix

Understanding and using the boosting module

Unlike bagging, which focuses on reducing variance, boosting aims to reduce bias without increasing variance.

Bagging creates a set of base estimators with equal importance, or weight, in determining the final prediction. The data fed into each base estimator is also uniformly resampled, with replacement, from the training set.
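
The code snippet referred to in the next subsection is not reproduced in this excerpt. As a stand-in, here is a minimal sketch of a bagging classifier with n_jobs = 20, assuming scikit-learn's BaggingClassifier and a synthetic dataset in place of the book's data:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for the book's training set
    X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

    # Each tree is trained on a uniform bootstrap resample of the data;
    # predictions are combined by majority vote, so every base estimator
    # carries equal weight in the final prediction.
    bagging_clf = BaggingClassifier(
        DecisionTreeClassifier(),
        n_estimators=100,
        bootstrap=True,  # resample uniformly, with replacement
        n_jobs=20,       # train up to 20 base estimators in parallel
        random_state=42,
    )
    bagging_clf.fit(X, y)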

Determining the possibility of parallel processing

From the description of bagging above, you may imagine that it is relatively easy to run bagging algorithms in parallel: each process can independently perform sampling and model training, and aggregation is only performed at the last step, once all the base estimators have been trained. In the preceding code snippet, I set n_jobs = 20 to build the bagging classifier. When it is being trained, at most 20 cores on the host machine will be used.
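
If you want to verify that the extra cores actually help, a rough check is to time the same fit with different n_jobs values. This is only a sketch, not a benchmark from the book:

    from time import perf_counter

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=5000, n_features=20, random_state=42)

    # Fit the same ensemble single-core and with up to 20 cores; the
    # wall-clock difference shows how well bagging parallelizes.
    for n_jobs in (1, 20):
        clf = BaggingClassifier(
            DecisionTreeClassifier(),
            n_estimators=100,
            n_jobs=n_jobs,
            random_state=42,
        )
        start = perf_counter()
        clf.fit(X, y)
        print(f"n_jobs={n_jobs}: fit took {perf_counter() - start:.2f}s")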

Boosting solves a different problem. The primary goal is to create an estimator with low bias. In the world...
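
As a point of contrast with the bagging sketch above, a minimal boosting example, assuming scikit-learn's AdaBoostClassifier with decision stumps as the weak learners (the truncated passage does not name an estimator), might look like this. Note that there is no n_jobs parameter here: each boosting round reweights the training data based on the previous round's errors, so the base estimators must be trained one after another.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=42)

    # Boosting trains base estimators sequentially: each new stump focuses
    # on the samples the previous ones misclassified, driving down bias.
    boosting_clf = AdaBoostClassifier(
        DecisionTreeClassifier(max_depth=1),  # weak learner: a decision stump
        n_estimators=100,
        learning_rate=0.5,
        random_state=42,
    )
    boosting_clf.fit(X, y)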
