
As mentioned in the earlier pages, while Spark can be deployed on a cluster, you can also run it in local mode on a single machine.
In this chapter, we are going to download and install Apache Spark on a Linux machine and run it in local mode. Before we do anything, we need to download Apache Spark from the downloads page of the Apache Spark project website:
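You can grab the package through your browser, or from the command line. As a rough sketch, assuming you picked the 2.2.0 build packaged for Hadoop 2.7 (substitute whichever release the downloads page offers you), the download looks like this:

$ wget https://archive.apache.org/dist/spark/spark-2.2.0/spark-2.2.0-bin-hadoop2.7.tgz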
If you are using Windows, please remember to use a pathname without any spaces.
The tar utility is generally used to unpack TAR files. If you don't have tar installed, you can get it from your distribution's package repository, or use 7-Zip, which is also one of my favorite utilities.
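Assuming the filename from the download step above, unpacking and moving into the new folder is a two-line job:

$ tar -xzf spark-2.2.0-bin-hadoop2.7.tgz
$ cd spark-2.2.0-bin-hadoop2.7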
The bin folder contains a number of executable shell scripts such as pyspark, sparkR, spark-shell, spark-sql, and spark-submit. All of these executables are used to interact with Spark, and we will be using most, if not all, of them.
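A quick way to confirm that these scripts are in place and the download is intact is to ask one of them for its version. This is just a sanity check, run from the top of the extracted folder:

$ ./bin/spark-submit --version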
The distribution also includes a yarn folder. The example below is a Spark build for Hadoop version 2.7, which comes with YARN as a cluster manager.
Figure 1.2: Spark folder contents
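Although this chapter sticks to local mode, the same Hadoop 2.7 build can later be pointed at a YARN cluster simply by passing a master URL. As a sketch, assuming the HADOOP_CONF_DIR environment variable points at your cluster's configuration directory:

$ ./bin/spark-shell --master yarn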
We'll start by running the Spark shell, which is a very simple way to get started with Spark and learn the API. The Spark shell is a Scala Read-Evaluate-Print Loop (REPL); Spark also ships REPLs for Python and R.
You should change to the Spark download directory and run the Spark shell as follows: ./bin/spark-shell
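When started this way, the shell uses all of the cores on your machine. If you want to be explicit about the number of local worker threads, you can pass a local master URL; the 4 here is just an example value:

$ ./bin/spark-shell --master local[4]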
Figure 1.3: Starting Spark shell
We now have Spark running in local mode. We'll discuss the details of the deployment architecture a bit later in this chapter, but for now let's kick-start some basic Spark programming to appreciate the power and simplicity of the Spark framework.
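As a small taste of what is to come, here is the kind of thing you can type at the scala> prompt. This is a minimal sketch, assuming you started the shell from the Spark installation directory so that the bundled README.md file is on the local path:

scala> val readme = spark.read.textFile("README.md")
scala> readme.count()
scala> readme.filter(line => line.contains("Spark")).count()

The first line reads the file into a Dataset of strings, the second counts its lines, and the third counts only the lines that mention Spark. We will unpack what each of these calls does in the chapters that follow.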