Apache Hadoop 3 Quick Start Guide
By: Vijay Karambelkar

Overview of this book

Apache Hadoop is a widely used distributed data platform. It enables large datasets to be processed efficiently across clusters of machines instead of storing and processing all the data on one large computer. This book will get you started with the Hadoop ecosystem and introduce you to the main technical topics, including MapReduce, YARN, and HDFS. The book begins with an overview of big data and Apache Hadoop. Then, you will set up a pseudo-distributed Hadoop development environment and a multi-node enterprise Hadoop cluster. You will see how parallel programming paradigms such as MapReduce can solve many complex data processing problems. The book also covers the important aspects of the big data software development lifecycle, including quality assurance and control, performance, administration, and monitoring. You will then learn about the Hadoop ecosystem and tools such as Kafka, Sqoop, Flume, Pig, Hive, and HBase. Finally, you will look at advanced topics, including real-time streaming using Apache Storm and data analytics using Apache Spark. By the end of the book, you will be well versed in the different configurations of a Hadoop 3 cluster.

Writing Apache Pig scripts

Apache Pig allows users to write custom scripts on top of the MapReduce framework. Pig was created to offer a flexible way of programming over large datasets and to make Hadoop accessible to non-Java programmers. Pig applies multiple transformations to input data in order to produce output, running either on a single Java virtual machine or on an Apache Hadoop multi-node cluster. Pig can be used as part of ETL (Extract, Transform, Load) implementations for any big data project.
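
As a quick illustration of such a transformation pipeline, the following Pig Latin sketch counts word occurrences in a text file. The file paths and alias names are illustrative, not taken from the book:

    -- Load raw text; each record is one line (path is hypothetical)
    lines   = LOAD '/data/input/sample.txt' AS (line:chararray);
    -- Split each line into individual words
    words   = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
    -- Group identical words together and count them
    grouped = GROUP words BY word;
    counts  = FOREACH grouped GENERATE group AS word, COUNT(words) AS total;
    -- Write the results back to storage (path is hypothetical)
    STORE counts INTO '/data/output/word_counts';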

Setting up Apache Pig in your Hadoop environment is relatively easy compared to other software; all you need to do is download the Pig source and build it into a pig.jar file, which can then be used by your programs. Pig-generated compiled artifacts can be deployed on a standalone JVM, Apache Spark, Apache Tez, and MapReduce, and Pig supports six different execution environments (both local and distributed). The respective...
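
As a rough sketch of how these execution environments are selected (the script name is hypothetical, and the exact set of modes depends on the Pig version you have installed), a script is typically launched with the pig command and a -x flag:

    # Local mode: runs the script on a single JVM, no cluster required
    pig -x local wordcount.pig

    # Distributed modes: submit to a Hadoop cluster via MapReduce, Tez, or Spark
    pig -x mapreduce wordcount.pig
    pig -x tez wordcount.pig
    pig -x spark wordcount.pig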
