Learning Apache Apex

By: Gundabattula, Thomas Weise, Munagala V. Ramanath, David Yan, Kenneth Knowles

Overview of this book

Apache Apex is a next-generation stream processing framework designed to operate on data at large scale, with minimum latency, maximum reliability, and strict correctness guarantees. Half of the book consists of Apex applications, showing you key aspects of data processing pipelines such as connectors for sources and sinks, and common data transformations. The other half of the book is evenly split into explaining the Apex framework, and tuning, testing, and scaling Apex applications. Much of our economic world depends on growing streams of data, such as social media feeds, financial records, data from mobile devices, sensors and machines (the Internet of Things - IoT). The projects in the book show how to process such streams to gain valuable, timely, and actionable insights. Traditional use cases, such as ETL, that currently consume a significant chunk of data engineering resources are also covered. The final chapter shows you future possibilities emerging in the streaming space, and how Apache Apex can contribute to it.
Table of Contents (11 chapters)

Beam concepts


The premise for using Beam (and Apex) is that you are processing some massive datasets and/or data streams, so massive that they cannot be processed by conventional means on a single machine. You will need a fleet of computers and a programming model that somewhat automatically scales out to saturate all of your computers.

Pipelines, PTransforms, and PCollections

In Beam, you organize your processing into a directed graph called a pipeline. You may illustrate it something like this:

[Figure: a pipeline of PTransform boxes connected by PCollection arrows, with one composite box containing a subgraph]

The boxes are parallel computations called PTransforms. Note how one of the boxes contains a small subgraph: almost all PTransforms are actually encapsulated subgraphs, including both Join and Filter. The arrows represent your data flowing from one PTransform to another as a PCollection. A PCollection can be bounded, as with a classic static dataset such as a massive collection of logs or a database snapshot; in this case it is finite and you know it. However, a PCollection can just as easily...
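The relationships above can be sketched with a toy Python model. To be clear, this is not the Beam SDK: every class and method name here is illustrative, chosen only to mirror the concepts in the text. A pipeline is a directed graph of PTransforms, data flows along the arrows as PCollections, and a composite transform encapsulates a subgraph of other transforms.

```python
# Toy model (NOT the Beam SDK) of pipelines, PTransforms, and PCollections.
# All names here are illustrative, not Beam's actual API.

class PCollection:
    """A bounded collection of elements flowing between transforms."""
    def __init__(self, elements):
        self.elements = list(elements)

class PTransform:
    """A computation step: consumes a PCollection, produces a PCollection."""
    def expand(self, pcoll):
        raise NotImplementedError

class Filter(PTransform):
    def __init__(self, predicate):
        self.predicate = predicate
    def expand(self, pcoll):
        return PCollection(e for e in pcoll.elements if self.predicate(e))

class MapEach(PTransform):
    def __init__(self, fn):
        self.fn = fn
    def expand(self, pcoll):
        return PCollection(self.fn(e) for e in pcoll.elements)

class Composite(PTransform):
    """An encapsulated subgraph: a PTransform built from other PTransforms."""
    def __init__(self, *steps):
        self.steps = steps
    def expand(self, pcoll):
        for step in self.steps:
            pcoll = step.expand(pcoll)
        return pcoll

class Pipeline:
    """Chains PTransforms into a (linear) directed graph and runs them."""
    def __init__(self, source):
        self.source = PCollection(source)
        self.transforms = []
    def apply(self, transform):
        self.transforms.append(transform)
        return self
    def run(self):
        out = self.source
        for t in self.transforms:
            out = t.expand(out)
        return out.elements

# A bounded input, a composite transform (a subgraph of Filter + MapEach),
# then a final MapEach: this mirrors the boxes-and-arrows picture above.
result = (Pipeline([1, 2, 3, 4, 5])
          .apply(Composite(Filter(lambda x: x % 2 == 1),
                           MapEach(lambda x: x * 10)))
          .apply(MapEach(str))
          .run())
print(result)  # the odd elements, times ten, as strings
```

The point of the sketch is the shape, not the execution strategy: a real Beam runner would distribute each PTransform across many machines, while this toy model simply walks the graph in order on one.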
