Big Data Analytics with Hadoop 3

By: Sridhar Alla
Overview of this book

Apache Hadoop is the most popular platform for big data processing, and can be combined with a host of other big data tools to build powerful analytics solutions. Big Data Analytics with Hadoop 3 shows you how to do just that, by providing insights into the software as well as its benefits with the help of practical examples. Once you have taken a tour of Hadoop 3’s latest features, you will get an overview of HDFS, MapReduce, and YARN, and how they enable faster, more efficient big data processing. You will then move on to learning how to integrate Hadoop with open source tools, such as Python and R, to analyze and visualize data and perform statistical computing on big data. As you get acquainted with all this, you will explore how to use Hadoop 3 with Apache Spark and Apache Flink for real-time data analytics and stream processing. In addition to this, you will learn how to use Hadoop to build analytics solutions in the cloud and an end-to-end pipeline to perform big data analysis using practical use cases. By the end of this book, you will be well versed in the analytical capabilities of the Hadoop ecosystem, and you will be able to build powerful solutions that perform big data analytics and deliver insights effortlessly.
Chapter 4: Scientific Computing and Big Data Analysis with Python and Hadoop

DataFrame APIs and the SQL API


A DataFrame can be created in several ways; some of them are as follows, with a short sketch of the first few approaches after the list:

  • Executing SQL queries
  • Loading external data, such as Parquet, JSON, CSV, text, Hive, or JDBC sources
  • Converting RDDs to DataFrames
  • Loading a CSV file
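
The following is a minimal sketch of the first three approaches, assuming a spark-shell session where the SparkSession is available as spark; the view and file names used here are placeholders, not files from the book:

import spark.implicits._

// 1. Execute a SQL query against a table or view that has already been registered
val fromSql = spark.sql("SELECT * FROM some_registered_view")

// 2. Load external data, for example a JSON file
val fromJson = spark.read.json("people.json")

// 3. Convert an RDD of tuples to a DataFrame, naming the columns explicitly
val rdd = spark.sparkContext.parallelize(Seq(("Alabama", 2010, 4785492L)))
val fromRdd = rdd.toDF("State", "Year", "Population")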

Here, we will take a look at the statesPopulation.csv file, which we will then load as a DataFrame.

The CSV contains the population of US states for the years 2010 to 2016, in the following format:

State         Year    Population
Alabama       2010    4,785,492
Alaska        2010    714,031
Arizona       2010    6,408,312
Arkansas      2010    2,921,995
California    2010    37,332,685

Since this CSV file has a header, we can quickly load it into a DataFrame with implicit schema detection:

scala> val statesDF = spark.read.option("header", "true").option("inferSchema", "true").option("sep", ",").csv("statesPopulation.csv")
statesDF: org.apache.spark.sql.DataFrame = [State: string, Year: int ... 1 more field]
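
Before examining the schema, it can help to glance at a few rows to confirm that the header was consumed and the delimiter applied as expected; a minimal sketch, using only the statesDF value created above:

// Display the first five rows without truncating column values
statesDF.show(5, false)

// Count the total number of rows loaded from the CSV
statesDF.count()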

Once the DataFrame is loaded, we can examine its schema:

scala> statesDF.printSchema
root
 |-- State: string (nullable = true)
 |-- Year: integer (nullable = true)
 |-- Population: integer (nullable = true)
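
With the DataFrame in place, the SQL API mentioned in this section's title can also be exercised by registering the DataFrame as a temporary view; the following is a minimal sketch, where the view name states is an arbitrary choice rather than anything mandated by the book:

// Register the DataFrame as a temporary view so it can be queried with SQL
statesDF.createOrReplaceTempView("states")

// Run a SQL query against the view; the result is itself a DataFrame
val populous = spark.sql("SELECT State, Population FROM states WHERE Year = 2010 ORDER BY Population DESC")
populous.show()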
