Machine Learning for OpenCV 4

By: Aditya Sharma, Vishwesh Ravi Shrimali, Michael Beyeler

Overview of this book

OpenCV is an open-source library for building computer vision applications. The latest release, OpenCV 4, offers a plethora of features and platform improvements that are covered comprehensively in this up-to-date second edition. You'll start by understanding the new features and setting up OpenCV 4 to build your computer vision applications. You will explore the fundamentals of machine learning and learn to design different algorithms that can be used for image processing. Gradually, the book will take you through supervised and unsupervised machine learning. You will gain hands-on experience using scikit-learn in Python for a variety of machine learning applications. Later chapters focus on different machine learning algorithms, such as decision trees, support vector machines (SVMs), and Bayesian learning, and how they can be used for computer vision operations such as object detection. You will then delve into deep learning and ensemble learning, and discover their real-world applications, such as handwritten digit classification and gesture recognition. Finally, you'll get to grips with the latest Intel OpenVINO toolkit for building an image processing system. By the end of this book, you will have developed the skills you need to use machine learning for building intelligent computer vision applications with OpenCV 4.
Table of Contents (18 chapters)

Section 1: Fundamentals of Machine Learning and OpenCV
Section 2: Operations with OpenCV
Section 3: Advanced Machine Learning with OpenCV

Representing Data and Engineering Features

In the last chapter, we built our very first supervised learning models and applied them to some classic datasets, such as the Iris and Boston datasets. However, in the real world, data rarely comes in a neat <n_samples x n_features> feature matrix that is part of a pre-packaged database. Instead, it is our responsibility to find a meaningful way to represent our data. The process of finding the best way to represent our data is known as feature engineering, and it is one of the main tasks of data scientists and machine learning practitioners trying to solve real-world problems.
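
As a quick refresher on what such a feature matrix looks like in practice, here is a minimal sketch that inspects scikit-learn's bundled Iris dataset (the dataset and its shapes are the standard ones shipped with scikit-learn; the snippet is illustrative rather than taken from the book):

# Inspect the <n_samples x n_features> feature matrix of the Iris dataset.
from sklearn import datasets

iris = datasets.load_iris()
X, y = iris.data, iris.target

print(X.shape)             # (150, 4): 150 samples, 4 features per sample
print(iris.feature_names)  # names of the 4 feature columns
print(y.shape)             # (150,): one target label per sample

Every row of X is one sample and every column is one measured feature; feature engineering is the work of constructing such columns when your raw data does not come in this form.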

I know you would rather jump right to the end and build the deepest neural network mankind has ever seen. But, trust me, this stuff is important! Representing our data in the right way can have a much greater influence on the performance of...
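
To make that claim concrete, the following hedged sketch trains the same SVM classifier on raw and on standardized features; the choice of the wine dataset and the resulting gap in accuracy are illustrative assumptions, not figures from this book:

# Compare the same classifier on raw versus standardized features.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = datasets.load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Raw features: the columns live on very different scales.
raw_score = SVC().fit(X_train, y_train).score(X_test, y_test)

# Standardized features: zero mean, unit variance per column.
scaler = StandardScaler().fit(X_train)
scaled_score = SVC().fit(scaler.transform(X_train), y_train).score(
    scaler.transform(X_test), y_test)

print(f"raw: {raw_score:.2f}  standardized: {scaled_score:.2f}")

Nothing about the model changed between the two runs; only the representation of the data did, which is exactly the kind of effect this chapter is about.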
