Optimizing Databricks Workloads

By: Anirudh Kala, Anshul Bhatnagar, Sarthak Sarbahi
4.1 (13)
Overview of this book

Databricks is an industry-leading, cloud-based platform for data analytics, data science, and data engineering, supporting thousands of organizations across the world in their data journey. It is a fast, easy, and collaborative Apache Spark-based big data analytics platform for data science and data engineering in the cloud.

In Optimizing Databricks Workloads, you will start with a brief introduction to Azure Databricks and quickly move on to the important optimization techniques. The book covers how to select the optimal Spark cluster configuration for running big data processing workloads in Databricks, useful optimization techniques for Spark DataFrames, best practices for optimizing Delta Lake, and techniques to optimize Spark jobs through Spark Core. You will also learn about real-world scenarios where optimizing workloads in Databricks has helped organizations increase performance and save costs across various domains. By the end of this book, you will have the toolkit you need to speed up your Spark jobs and process your data more efficiently.
Table of Contents (13 chapters)

  • Section 1: Introduction to Azure Databricks (chapters 1-4)
  • Section 2: Optimization Techniques (chapters 5-9)
  • Section 3: Real-World Scenarios (chapters 10-13)

Learning about AQE

We already know how Spark works under the hood: whenever we apply transformations, Spark only prepares a plan, and as soon as an action is called, it executes those transformations. Now it's time to expand that knowledge and dive deeper into Spark's query execution mechanism.
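As a concrete illustration of that lazy behavior, here is a minimal PySpark sketch. The session setup and sample data are assumptions for the example; on Databricks, a SparkSession named spark is already provided:

    from pyspark.sql import SparkSession

    # On Databricks a SparkSession named `spark` already exists;
    # this builder line is only needed when running elsewhere.
    spark = SparkSession.builder.appName("lazy-eval-demo").getOrCreate()

    # Hypothetical sample data for illustration.
    df = spark.createDataFrame(
        [("alice", 34), ("bob", 29), ("carol", 41)],
        ["name", "age"],
    )

    # Transformations only build up a plan -- nothing runs yet.
    adults = df.filter(df.age > 30).select("name")

    # Calling an action triggers execution of the accumulated plan.
    adults.show()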

Every time Spark executes a query, it does so with the help of the following four plans (a sketch showing how to inspect them follows this list):

  • Parsed Logical Plan: Spark prepares a Parsed Logical Plan, where it checks the metadata (table name, column names, and more) to confirm whether the respective entities exist.
  • Analyzed Logical Plan: Spark accepts the Parsed Logical Plan and converts it into what is called the Analyzed Logical Plan. This is then sent to Spark's Catalyst optimizer, which is an advanced query optimizer for Spark.
  • Optimized Logical Plan: The Catalyst optimizer applies further optimizations and comes up with the final logical plan, called the Optimized Logical Plan.
  • Physical Plan: From the Optimized Logical Plan, Spark generates one or more physical plans that describe how the query will actually run on the cluster, and then picks the most cost-efficient one to execute.
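You can print all four plans for a query with an extended explain. A minimal sketch, reusing the hypothetical adults DataFrame from the earlier example:

    # extended=True prints every plan Spark produced for the query,
    # in the order described above.
    adults.explain(extended=True)

    # The output is organized under these headers:
    # == Parsed Logical Plan ==
    # == Analyzed Logical Plan ==
    # == Optimized Logical Plan ==
    # == Physical Plan ==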