Apache Spark 2.x for Java Developers

By: Kumar, Gulati
Overview of this book

Apache Spark is the buzzword in the big data industry right now, especially with the increasing need for real-time streaming and data processing. While Spark is built on Scala, the Spark Java API exposes all the Spark features available in the Scala version for Java developers. This book will show you how you can implement various functionalities of the Apache Spark framework in Java, without stepping out of your comfort zone. The book starts with an introduction to the Apache Spark 2.x ecosystem, followed by explaining how to install and configure Spark, and refreshes the Java concepts that will be useful to you when consuming Apache Spark's APIs. You will explore RDD and its associated common Action and Transformation Java APIs, set up a production-like clustered environment, and work with Spark SQL. Moving on, you will perform near-real-time processing with Spark Streaming, machine learning analytics with Spark MLlib, and graph processing with GraphX, all using various Java packages. By the end of the book, you will have a solid foundation in implementing components in the Spark framework in Java to build fast, real-time applications.

Spark SQL operations


Working in Spark SQL happens primarily in three stages: creating a dataset, applying SQL operations to it, and finally persisting the dataset. So far we have created datasets from RDDs and other data sources (refer to Chapter 5, Working with Data and Storage) and persisted datasets as discussed in the previous section. Now let's look at some of the ways in which SQL operations can be applied to a dataset; a sketch of the full three-stage flow follows below.
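The following is a minimal sketch of all three stages in Java. The input path, the employee view name, and the output location are hypothetical, chosen only for illustration:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkSqlStages {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("SparkSqlStages")
                .master("local[*]")
                .getOrCreate();

        // Stage 1: create a dataset (here, from a hypothetical JSON file)
        Dataset<Row> empDs = spark.read().json("/tmp/employees.json");

        // Stage 2: apply SQL operations by registering a temporary view
        empDs.createOrReplaceTempView("employee");
        Dataset<Row> result = spark.sql(
                "SELECT name, salary FROM employee WHERE salary > 50000");

        // Stage 3: persist the resulting dataset (Parquet chosen for illustration)
        result.write().parquet("/tmp/high_earners.parquet");

        spark.stop();
    }
}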

Untyped dataset operations

Once we have created the dataset, Spark provides a couple of handy functions that perform basic SQL operations and analysis, such as the following:

  • show(): This displays the top 20 rows of the dataset in tabular form. Strings of more than 20 characters are truncated, and all cells are right-aligned:
emp_ds.show();

Another variant of the show() function allows the user to disable the 20-character limit by passing a Boolean false, which turns off truncation of strings:

emp_ds.show(false);
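The Spark 2.x Dataset API also offers overloads of show() that control how many rows are printed. Here is a self-contained sketch of the variants; the in-memory dataset is built purely for demonstration:

import java.util.Arrays;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class ShowVariants {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("ShowVariants")
                .master("local[*]")
                .getOrCreate();

        // A tiny dataset; one value is deliberately longer than 20 characters
        Dataset<String> ds = spark.createDataset(
                Arrays.asList("short",
                        "a string that is definitely longer than twenty characters"),
                Encoders.STRING());

        ds.show();          // top 20 rows, strings truncated to 20 characters
        ds.show(false);     // no truncation: full cell contents are printed
        ds.show(1);         // only the first row, still truncated
        ds.show(1, false);  // first row, untruncated

        spark.stop();
    }
}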