
Building Big Data Pipelines with Apache Beam
Let's jump right into writing our first pipeline. The first part of this book will focus on Beam's Java SDK. We assume that you are familiar with programming in Java and building a project using Apache Maven (or any similar tool). The following code can be found in the com.packtpub.beam.chapter1.FirstPipeline
class in the chapter1
module in the GitHub repository. We would like you to go through all of the code, but we will highlight the most important parts here:
First, we read the input data from a resource called lorem.txt. The code is standard Java, as follows:

    ClassLoader loader = FirstPipeline.class.getClassLoader();
    String file = loader.getResource("lorem.txt").getFile();
    List<String> lines = Files.readAllLines(
        Paths.get(file), StandardCharsets.UTF_8);
Next, we create a Pipeline object, which is a container for a Directed Acyclic Graph (DAG) that represents the data transformations needed to produce output from input data:

    Pipeline pipeline = Pipeline.create();
Important note
There are multiple ways to create a pipeline, and this is the simplest. We will see different approaches to pipelines in Chapter 2, Implementing, Testing, and Deploying Basic Pipelines.
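As an illustration of one such alternative (a sketch only; the details are covered in the next chapter), a pipeline is often created from parsed command-line options using Beam's PipelineOptionsFactory:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class PipelineFromArgs {
  public static void main(String[] args) {
    // Parse options such as --runner=DirectRunner from the command line.
    PipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().create();
    Pipeline pipeline = Pipeline.create(options);
    // ... build and run the pipeline as in the rest of this section ...
  }
}
```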
Next, we feed our input data into the pipeline as a PCollection object. Each PCollection object (that is, parallel collection) can be imagined as a line (an edge) connecting two vertices (PTransforms, or parallel transforms) in the pipeline's DAG. We create our first PCollection as follows:

    PCollection<String> input = pipeline.apply(Create.of(lines));
Our DAG will then look like the following diagram:
Figure 1.1 – A pipeline containing a single PTransform
Each PTransform can have one main output and possibly multiple side output PCollections. Each PCollection has to be consumed by another PTransform, or it might be excluded from the execution. As we can see, the main output (the PCollection produced by the PTransform called Create) is not presently consumed by any PTransform. We connect a PTransform to a PCollection by applying the PTransform to the PCollection. We do that by using the following code:

    PCollection<String> words = input.apply(Tokenize.of());
This creates a new PTransform (Tokenize) and connects it to our input PCollection, as shown in the following figure:
Figure 1.2 – A pipeline with two PTransforms
We'll skip the details of how the Tokenize PTransform is implemented for now (we will return to that in Chapter 5, Using SQL for Pipeline Implementation, which describes how to structure code in general). Currently, all we have to remember is that the Tokenize PTransform takes input lines of text and splits each line into words, which produces a new PCollection that contains all of the words from all the lines of the input PCollection.
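To illustrate only the intended behavior, the per-line splitting can be expressed in plain Java; note that the exact splitting rule (lowercasing and splitting on non-letter characters) is our assumption here, not necessarily what the repository's Tokenize does:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical illustration of what a Tokenize-like transform does to each
// line; the real Tokenize is defined in the book's GitHub repository.
public class TokenizeSketch {
  static List<String> splitToWords(String line) {
    return Arrays.stream(line.toLowerCase().split("[^\\p{L}]+"))
        .filter(w -> !w.isEmpty())   // drop empty tokens from leading separators
        .collect(Collectors.toList());
  }
}
```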
Next, we apply two more PTransforms. One will produce the well-known word count example, so popular in every big data textbook, and the other will simply print the output PCollection to standard output:

    PCollection<KV<String, Long>> result =
        words.apply(Count.perElement());
    result.apply(PrintElements.of());
Details of both the Count PTransform (which is Beam's built-in PTransform) and PrintElements (which is a user-defined PTransform) will be discussed later. For now, if we focus on the pipeline construction process, we can see that our pipeline looks as follows:
Figure 1.3 – The final word count pipeline
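Before running the pipeline, it may help to see the same computation, counting the occurrences of each element and printing the results, written in plain Java collections code. This is only an analogy to build intuition; it is not how Beam executes the pipeline:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class WordCountAnalogy {
  // The effect of Count.perElement, expressed over an in-memory list:
  // each distinct element mapped to its number of occurrences.
  static Map<String, Long> countPerElement(List<String> words) {
    return words.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
  }

  public static void main(String[] args) {
    List<String> words = List.of("lorem", "ipsum", "lorem");
    // The effect of PrintElements: print each resulting element to stdout.
    countPerElement(words).forEach((w, c) -> System.out.println(w + ": " + c));
  }
}
```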
Finally, we run the pipeline and wait for it to finish:

    pipeline.run().waitUntilFinish();

This causes the pipeline to be passed to a runner (configured in the pipeline; if omitted, it defaults to a runner available on the classpath). The standard default runner is the DirectRunner, which executes the pipeline in the local Java Virtual Machine (JVM) only. This runner is mostly suitable only for testing, as we will see in the next chapter.
We can run this code from the chapter1 module, which will yield the expected output on standard output:

    chapter1$ ../mvnw exec:java \
        -Dexec.mainClass=com.packtpub.beam.chapter1.FirstPipeline
Important note
The ordering of output is not defined and is likely to vary over multiple runs. This is to be expected and is due to the fact that the pipeline underneath is executed in multiple threads.
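When writing tests for such pipelines, this means output should be compared order-insensitively; Beam's testing utility PAssert offers containsInAnyOrder for exactly this purpose. The underlying idea, in plain Java, is to compare multisets (element frequencies) rather than lists:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class UnorderedCompare {
  // Two runs are equivalent if they produced the same elements with the
  // same multiplicities, regardless of order.
  static <T> boolean sameElements(List<T> a, List<T> b) {
    Function<List<T>, Map<T, Long>> freq = list -> list.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    return freq.apply(a).equals(freq.apply(b));
  }
}
```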
The application of a PTransform to a PCollection can be chained, so the preceding code can be simplified to the following:

    ClassLoader loader = FirstPipeline.class.getClassLoader();
    String file = loader.getResource("lorem.txt").getFile();
    List<String> lines = Files.readAllLines(
        Paths.get(file), StandardCharsets.UTF_8);
    Pipeline pipeline = Pipeline.create();
    pipeline.apply(Create.of(lines))
        .apply(Tokenize.of())
        .apply(Count.perElement())
        .apply(PrintElements.of());
    pipeline.run().waitUntilFinish();
When used with care, this style greatly improves the readability of the code.
Now that we have written our first pipeline, let's see how to port it from a bounded data source to a streaming source!