Building Big Data Pipelines with Apache Beam

By: Lukavský

Overview of this book

Apache Beam is an open source unified programming model for implementing and executing data processing pipelines, including Extract, Transform, and Load (ETL), batch, and stream processing. This book will help you to confidently build data processing pipelines with Apache Beam. You’ll start with an overview of Apache Beam and understand how to use it to implement basic pipelines. You’ll also learn how to test and run the pipelines efficiently. As you progress, you’ll explore how to structure your code for reusability and also use various Domain Specific Languages (DSLs). Later chapters will show you how to use schemas and query your data using (streaming) SQL. Finally, you’ll understand advanced Apache Beam concepts, such as implementing your own I/O connectors. By the end of this book, you’ll have gained a deep understanding of the Apache Beam model and be able to apply it to solve problems.
Table of Contents (13 chapters)
  • Section 1 – Apache Beam: Essentials
  • Section 2 – Apache Beam: Toward Improving Usability
  • Section 3 – Apache Beam: Advanced Concepts

Defining splittable DoFn as a unification for bounded and unbounded sources

Beam offers a wide variety of source and sink transforms. We will not walk through them in this book because their details can easily be found online. We have used the KafkaIO transform heavily throughout this book; other source and sink transforms are applied analogously, with the specifics depending on the target storage system.
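
As a quick refresher on how such source transforms are applied, the following is a minimal sketch of reading from Kafka with KafkaIO. The bootstrap server address, the topic name, and the choice of String deserializers are placeholders for illustration only:

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.io.kafka.KafkaRecord;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaReadSketch {
  public static void main(String[] args) {
    Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    // Read key-value records from a Kafka topic. The broker address and
    // topic name below are placeholders.
    PCollection<KafkaRecord<String, String>> records =
        pipeline.apply(
            "ReadFromKafka",
            KafkaIO.<String, String>read()
                .withBootstrapServers("localhost:9092")
                .withTopic("my-input-topic")
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class));

    // Downstream transforms would be applied to 'records' here.
    pipeline.run().waitUntilFinish();
  }
}

Writing to Kafka with KafkaIO.write() follows the same pattern, with serializers in place of deserializers.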

The question that arises is this: what should we do when there are specific requirements for the way the data is read (or stored), or when we need to connect to a data source for which Beam does not have a connector? Let's first see how to implement a custom source.

A fundamental requirement for any source is the ability to split itself. We need to split a bounded source to be able to parallelize its processing, and we need to split an unbounded source to obtain a persistent moment in time that we can return to in case of a failure. Such a moment in time is typically...
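
To make the splitting requirement concrete, here is a minimal sketch of a bounded splittable DoFn. The class name EmitRangeFn and the toy logic (emitting the numbers 0 to N-1 for every input element N) are purely illustrative; the essential parts are the restriction (an OffsetRange), the initial split, and the tryClaim() loop that lets the runner take the unclaimed part of the work away.

import org.apache.beam.sdk.io.range.OffsetRange;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.splittabledofn.OffsetRangeTracker;
import org.apache.beam.sdk.transforms.splittabledofn.RestrictionTracker;

// A toy bounded splittable DoFn: for each input element N, emit 0..N-1.
// The restriction (an OffsetRange) describes the part of the work that one
// invocation is responsible for; the runner may split it to parallelize
// processing or to checkpoint progress.
class EmitRangeFn extends DoFn<Long, Long> {

  @GetInitialRestriction
  public OffsetRange initialRestriction(@Element Long element) {
    return new OffsetRange(0, element);
  }

  @SplitRestriction
  public void splitRestriction(
      @Restriction OffsetRange restriction, OutputReceiver<OffsetRange> receiver) {
    // Offer the runner two halves of the range as an initial split; the
    // runner may split further dynamically at run time.
    if (restriction.getTo() - restriction.getFrom() < 2) {
      receiver.output(restriction);
      return;
    }
    long mid = (restriction.getFrom() + restriction.getTo()) / 2;
    receiver.output(new OffsetRange(restriction.getFrom(), mid));
    receiver.output(new OffsetRange(mid, restriction.getTo()));
  }

  @NewTracker
  public OffsetRangeTracker newTracker(@Restriction OffsetRange restriction) {
    return new OffsetRangeTracker(restriction);
  }

  @ProcessElement
  public void processElement(
      RestrictionTracker<OffsetRange, Long> tracker, OutputReceiver<Long> output) {
    // Claim positions one by one; if tryClaim() fails, the remainder of the
    // range has been split off and will be processed elsewhere, so we stop.
    for (long i = tracker.currentRestriction().getFrom(); tracker.tryClaim(i); ++i) {
      output.output(i);
    }
  }
}

For an unbounded source, the same mechanism lets the runner checkpoint the current restriction, which is exactly the persistent moment in time mentioned above.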
