Optimizing Databricks Workloads

By: Anirudh Kala, Bhatnagar, Sarbahi
4.1 (13)
Overview of this book

Databricks is an industry-leading, cloud-based platform for data analytics, data science, and data engineering: a fast, easy, and collaborative Apache Spark-based big data analytics platform that supports thousands of organizations across the world in their data journey. Optimizing Databricks Workloads starts with a brief introduction to Azure Databricks and then moves quickly into the optimization techniques that matter most. The book covers how to select the optimal Spark cluster configuration for running big data processing workloads in Databricks, useful optimization techniques for Spark DataFrames, best practices for optimizing Delta Lake, and techniques to optimize Spark jobs through Spark Core. It also presents real-world scenarios in which optimizing Databricks workloads has helped organizations increase performance and save costs across various domains. By the end of this book, you will have the toolkit you need to speed up your Spark jobs and process your data more efficiently.
Table of Contents (13 chapters)
  • Section 1: Introduction to Azure Databricks
  • Section 2: Optimization Techniques
  • Section 3: Real-World Scenarios

Batch ETL process demo

Databricks professionals often talk about a medallion architecture. In this architecture, the data flowing through a pipeline is organized into three layers: bronze, silver, and gold. The bronze layer holds the raw data, the silver layer the cleansed data, and the gold layer the aggregated or modeled data.
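To make the layering concrete, here is a minimal PySpark sketch of a bronze/silver/gold flow on Delta Lake. The paths, column names, and aggregation are illustrative assumptions rather than the book's own example, and spark is the SparkSession that a Databricks notebook provides.

from pyspark.sql import functions as F

# Bronze: land the raw data unmodified (the source path is hypothetical).
raw_df = spark.read.json("dbfs:/tmp/demo/raw_events/")
raw_df.write.format("delta").mode("overwrite").save("dbfs:/tmp/demo/bronze/events")

# Silver: cleanse the bronze data (deduplicate, drop bad records).
bronze_df = spark.read.format("delta").load("dbfs:/tmp/demo/bronze/events")
silver_df = bronze_df.dropDuplicates().filter(F.col("event_type").isNotNull())
silver_df.write.format("delta").mode("overwrite").save("dbfs:/tmp/demo/silver/events")

# Gold: aggregate the silver data for reports and dashboards.
gold_df = silver_df.groupBy("event_type").agg(F.count("*").alias("event_count"))
gold_df.write.format("delta").mode("overwrite").save("dbfs:/tmp/demo/gold/event_counts")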

Check out https://databricks.com/solutions/data-pipelines for more information. In this section, we will walk through a real-world batch ETL process. We will perform the following steps:

  • Read the data and create a Spark DataFrame.
  • Perform transformations to clean the data and implement business logic.
  • Write the DataFrame to Delta Lake.
  • Create a Delta table from the written data and perform exploratory data analysis.

The dataset that we will be working with is part of databricks-datasets and is located in the following directory; a minimal sketch of these steps follows the path:

dbfs:/databricks-datasets/samples/lending_club/parquet/
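The following PySpark sketch runs through the four steps above. It is illustrative rather than the book's own code: the column names (int_rate, loan_amnt, grade), the Delta path, and the table name are assumptions, and spark is the SparkSession that a Databricks notebook provides.

from pyspark.sql import functions as F

# Step 1: read the Parquet files into a Spark DataFrame.
lending_df = spark.read.parquet(
    "dbfs:/databricks-datasets/samples/lending_club/parquet/"
)

# Step 2: illustrative cleansing and business logic. The column names
# used here are assumptions about the dataset's schema.
clean_df = (
    lending_df
    .dropDuplicates()
    .withColumn("int_rate", F.regexp_replace("int_rate", "%", "").cast("double"))
    .filter(F.col("loan_amnt").isNotNull())
)

# Step 3: write the DataFrame to Delta Lake (the path is an arbitrary choice).
delta_path = "dbfs:/tmp/lending_club_clean_delta"
clean_df.write.format("delta").mode("overwrite").save(delta_path)

# Step 4: create a Delta table over the written data and explore it.
spark.sql(
    f"CREATE TABLE IF NOT EXISTS lending_club_clean USING DELTA LOCATION '{delta_path}'"
)
spark.sql(
    "SELECT grade, COUNT(*) AS loans, AVG(loan_amnt) AS avg_loan_amnt "
    "FROM lending_club_clean GROUP BY grade ORDER BY grade"
).show()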

So, create...
