
Optimizing Databricks Workloads

Databricks provides a collaborative platform for data engineering and data science. Powered by Apache Spark™, Databricks enables ML at scale. It has also revolutionized existing data lakes by introducing the Lakehouse architecture. You can refer to the following published whitepaper to learn about the Lakehouse architecture in detail: http://cidrdb.org/cidr2021/papers/cidr2021_paper17.pdf.
Irrespective of your data role or industry, Databricks has something for everybody.
Databricks and Spark together provide a unified platform for big data processing in the cloud. This is possible because Spark is a compute engine that remains decoupled from storage. Spark in Databricks combines ETL, ML, and real-time streaming with collaborative notebooks. Processing in Databricks can scale to petabytes of data and thousands of nodes in no time!
Spark can connect to any number of data sources, including Amazon S3, Azure Data Lake, HDFS, Kafka, and many more. As Databricks lives in the cloud, spinning up a Spark cluster is possible with the click of a button. We do not need to worry about setting up infrastructure to use Databricks. This enables us to focus on the data at hand and continue solving problems.
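For instance, assuming a CSV file already exists at a hypothetical DBFS path such as dbfs:/tmp/sales.csv, reading it into a Spark DataFrame from a Databricks notebook is a minimal sketch like the following:

# Read a CSV file from DBFS into a Spark DataFrame
# (dbfs:/tmp/sales.csv is a hypothetical example path)
df = (spark.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("dbfs:/tmp/sales.csv"))

# Inspect the schema and preview the data
df.printSchema()
display(df)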
Currently, Databricks is available on four major cloud platforms: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and Alibaba Cloud. In this book, we will be working on Azure Databricks with the standard pricing tier. Databricks is a first-party service in Azure and is deeply integrated with the complete Azure ecosystem.
Since Azure Databricks is a cloud-native managed service, there is a cost associated with its usage. To view the Databricks pricing, check out https://azure.microsoft.com/en-in/pricing/details/databricks/.
To create a Databricks instance in Azure, we will need an Azure subscription and a resource group. An Azure subscription is a gateway to Microsoft's cloud services. It entitles us to create and use Azure's services. A resource group in Azure is equivalent to a logical container that hosts the different services. To create an Azure Databricks instance, we need to complete the following steps:
In the Azure portal, search for Azure Databricks and select it from the drop-down menu. Then click on Create.
Figure 1.5 – Creating Azure Databricks
Figure 1.6 – Creating an Azure Databricks workspace
Figure 1.7 – Azure Databricks workspace
Now that we have a workspace up and running, let's explore its core concepts.
The Databricks workspace menu is displayed on the left pane. We can configure the menu based on our workloads, Data Science and Engineering or Machine Learning. Let's start with the former. We will learn more about the ML functionalities in Chapter 3, Learning about Machine Learning and Graph Processing with Databricks. The menu consists of the following:
Figure 1.8 – Clusters in Azure Databricks
Now that we have an understanding of the core concepts of Databricks, let's create our first Spark cluster!
It is time to create our first cluster! In the following steps, we will create an all-purpose cluster and later attach it to a notebook. We will be discussing cluster configurations in detail in Chapter 4, Managing Spark Clusters:
Keep the Spot instances checkbox disabled. When enabled, the cluster uses Azure Spot VMs to save costs.
Figure 1.9 – Initializing a Databricks cluster
With the Spark cluster initialized, let's create our first Databricks notebook!
Now we'll create our first Databricks notebook. On the left pane menu, click on Create and select Notebook. Give the notebook a suitable name, keep the Default Language option as Python, set Cluster to the cluster we just created, and click on Create.
Figure 1.10 – Creating a Databricks notebook
Inside a notebook, we can create cells to independently run blocks of code. A new cell can be created with the click of a button. For people who have worked with Jupyter Notebooks, this interface might look familiar.
We can also execute code in different languages right inside one notebook. For example, the first notebook that we've created has a default language of Python, but we can also run code in Scala, SQL, and R in the same notebook! This is made possible with the help of magic commands. We need to specify the magic command at the beginning of a new cell:
%python or %py for Python
%r for R
%scala for Scala
%sql for SQL
Note
The %pip magic command can also be used in Databricks notebooks to manage notebook-scoped libraries.
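For example, a cell like the following installs a library scoped only to the current notebook session (the package chosen here is just an illustration):

%pip install nltk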
Let us look at executing code in multiple languages in the following image:
Figure 1.11 – Executing code in multiple languages
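As a minimal sketch, the following three cells could live in the same notebook whose default language is Python, each running a different language:

# Cell 1 – default language (Python), no magic command needed
print("Hello from Python")

%scala
// Cell 2 – Scala
println("Hello from Scala")

%sql
-- Cell 3 – SQL
SELECT 'Hello from SQL' AS greeting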
We can also render a cell as Markdown using the %md magic command. This allows us to add rendered text between cells of code.
Databricks notebooks also support rendering HTML graphics using the displayHTML function. Currently, this feature is only supported for Python, R, and Scala notebooks. To use the function, we need to pass in HTML, CSS, or JavaScript code:
Figure 1.12 – Rendering HTML in a notebook
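A minimal sketch of calling displayHTML from a Python cell (the HTML snippet itself is arbitrary):

# Render a small block of HTML directly in the notebook output
displayHTML("""
  <h2 style="color:steelblue">Insurance claims report</h2>
  <p>This text is rendered as HTML inside the notebook.</p>
""")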
We can use the %sh magic command to run shell commands on the driver node.
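For example, the following cell prints the working directory and lists a folder on the driver node:

%sh
pwd
ls /tmp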
Databricks provides a Databricks Utilities (dbutils) module to perform a range of tasks. With dbutils, we can work with external storage, parameterize notebooks, and handle secrets. To list the available functionalities of dbutils, we can run dbutils.help() in a Python or Scala notebook.
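As a quick sketch, the following Python cell lists the available utilities and then drills into the file system utility:

# List all dbutils modules with short descriptions
dbutils.help()

# Get help for the file system utility specifically
dbutils.fs.help()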
The notebooks also include a feature called widgets. Widgets help add parameters to a notebook and are made available with the dbutils module. By default, widgets are visible at the top of a notebook and come in four types: text, dropdown, combobox, and multiselect.
Figure 1.13 – Notebook widget example. Here, we create a text widget, fetch its value, and call it in a print statement
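A minimal sketch of the cell described in the figure caption (the widget name and default value here are just illustrations):

# Create a text widget with a name, a default value, and a label
dbutils.widgets.text("country", "India", "Country")

# Fetch the widget's current value and use it in a print statement
country = dbutils.widgets.get("country")
print("Selected country: " + country)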
We can also run one notebook inside another using the %run magic command. The magic command must be followed by the notebook path.
Figure 1.14 – Using the %run magic command
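For example, assuming a helper notebook exists at the hypothetical path /Users/admin/setup-notebook, it can be executed from the current notebook with:

%run /Users/admin/setup-notebook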
DBFS is a filesystem mounted into every Databricks workspace for temporary storage. It is an abstraction on top of a scalable object store in the cloud. For instance, in the case of Azure Databricks, DBFS is built on top of Azure Blob Storage. But this is managed for us, so we needn't worry too much about how and where DBFS is actually located. All we need to understand is how we can use DBFS inside a Databricks workspace.
DBFS helps us in the following ways:
DBFS has a default storage location called the DBFS root. We can access DBFS in several ways:
The %fs magic command: We can use the %fs command in a notebook cell.
dbutils: We can call the dbutils module to access DBFS. Using dbutils.fs.ls("<path>") is equivalent to running %fs ls <path>, where <path> is a DBFS path. Both commands list the directories at a given DBFS path.
Figure 1.15 – Listing all files in the DBFS root using the %fs magic command
Note
We need to enclose dbutils.fs.ls("<path>") in Databricks' display() function to obtain a rendered output.
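A minimal sketch of both approaches, listing the DBFS root (any DBFS path works):

%fs ls /

# The equivalent call through dbutils, wrapped in display() for a rendered table
display(dbutils.fs.ls("/"))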
A Databricks job helps us run and automate activities such as an ETL job or a data analytics task. A job can be executed either immediately or on a schedule. A job can be created by using the UI, the CLI, or by invoking the Jobs API. We will now create a job using the Databricks UI:
Create a new notebook named jobs-notebook and paste the following code in a new cell. This code creates a new delta table and inserts records into the table. We'll learn about Delta Lake in more detail later in this chapter. Note that the following two code blocks must be run in the same cell.
The following code block creates a delta table in Databricks with the name of insurance_claims. The table has four columns: user_id, city, country, and amount:
%sql
-- Creating a delta table and storing data in DBFS
-- Our table's name is 'insurance_claims' and has four columns
CREATE OR REPLACE TABLE insurance_claims (
  user_id INT NOT NULL,
  city STRING NOT NULL,
  country STRING NOT NULL,
  amount INT NOT NULL
)
USING DELTA
LOCATION 'dbfs:/tmp/insurance_claims';
Now, we will insert five records into the table. In the following code block, every INSERT INTO statement inserts one new record into the delta table:
INSERT INTO insurance_claims (user_id, city, country, amount) VALUES (100, 'Mumbai', 'India', 200000);
INSERT INTO insurance_claims (user_id, city, country, amount) VALUES (101, 'Delhi', 'India', 400000);
INSERT INTO insurance_claims (user_id, city, country, amount) VALUES (102, 'Chennai', 'India', 100000);
INSERT INTO insurance_claims (user_id, city, country, amount) VALUES (103, 'Bengaluru', 'India', 700000);
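Once the inserts have run, the table can be queried from a new cell to verify its contents:

%sql
SELECT * FROM insurance_claims ORDER BY user_id;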
When configuring the job's task, select the jobs-notebook notebook that we created, and in Cluster, select an existing all-purpose cluster.
Figure 1.16 – Successful manual run of a Databricks job
Databricks Community Edition is a platform that provides a free Databricks workspace. It supports a single-node cluster with one driver and no workers. The Community edition is great for beginners getting started with Databricks, but several features of Azure Databricks are not supported in it. For example, we cannot create jobs or change cluster configuration settings. To sign up for Databricks Community Edition, visit https://community.cloud.databricks.com/login.html.