Machine Learning with Apache Spark Quick Start Guide

By: Quddus
Overview of this book

Every person and every organization in the world manages data, whether they realize it or not. Data is used to describe the world around us and can be used for almost any purpose, from analyzing consumer habits to fighting disease and serious organized crime. Ultimately, we manage data in order to derive value from it, and many organizations around the world have traditionally invested in technology to help process their data faster and more efficiently.

But we now live in an interconnected world driven by mass data creation and consumption, where data is no longer rows and columns restricted to a spreadsheet but an organic and evolving asset in its own right. With this realization come major challenges for organizations: how do we manage the sheer volume of data being created every second (think not only spreadsheets and databases, but also social media posts, images, videos, music, blogs, and so on)? And once we can manage all of this data, how do we derive real value from it?

The focus of Machine Learning with Apache Spark is to help us answer these questions in a hands-on manner. We introduce the latest scalable technologies to help us manage and process big data, and we then apply advanced analytical algorithms to real-world use cases in order to uncover patterns, derive actionable insights, and learn from this big data.

Summary

In this chapter, we have explored a new breed of distributed and scalable technologies that allow us to reliably and securely store, manage, model, transform, process, and analyze huge volumes of structured, semi-structured, and unstructured data, in both batch and real time. These technologies enable us to derive real, actionable insights from that data using advanced analytics.
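
To make the batch side of this concrete, here is a minimal sketch of our own (not code from this book), assuming a local PySpark installation; the dataset, column names, and aggregation are purely illustrative:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a single-node Spark session; on a cluster, only the master URL changes.
spark = (SparkSession.builder
         .appName("batch-example")
         .master("local[*]")
         .getOrCreate())

# A tiny structured dataset standing in for data that would normally be read
# from a distributed store such as HDFS or S3 (names and values are made up).
df = spark.createDataFrame(
    [("alice", 34), ("bob", 45), ("carol", 29)],
    ["name", "age"],
)

# Transform and aggregate with the DataFrame API: the same code covers the
# "transform, process, and analyze" steps described above, at any scale.
df.filter(F.col("age") > 30).agg(F.avg("age").alias("avg_age")).show()

spark.stop()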

In the next chapter, we will guide you through how to install, configure, deploy, and administer a single-node analytical development environment utilizing a subset of these technologies, including Apache Spark, Apache Kafka, Jupyter Notebook, Python, Java, and Scala!
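
As a brief preview of what such an environment makes possible, the following is a hedged sketch (again our own illustration, not code from the next chapter) of consuming a Kafka topic in real time with Spark Structured Streaming. It assumes a local Kafka broker on localhost:9092, a hypothetical topic named events, and that the spark-sql-kafka connector package matching your Spark version is available:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("streaming-example")
         .master("local[*]")
         .getOrCreate())

# Subscribe to the Kafka topic as an unbounded streaming DataFrame
# (the broker address and topic name are assumptions for this sketch).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers keys and values as bytes; cast the value to a string and
# print each micro-batch to the console.
query = (events.selectExpr("CAST(value AS STRING) AS value")
         .writeStream
         .format("console")
         .outputMode("append")
         .start())

query.awaitTermination()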
