Data Engineering with Databricks Cookbook

By: Pulkit Chadha
4.4 (7)

Overview of this book

Written by a Senior Solutions Architect at Databricks, Data Engineering with Databricks Cookbook will show you how to effectively use Apache Spark, Delta Lake, and Databricks for data engineering, starting with a comprehensive introduction to data ingestion and loading with Apache Spark. What makes this book unique is its recipe-based approach, which will help you put your knowledge to use straight away and tackle common problems. You’ll be introduced to various data manipulation and data transformation solutions that can be applied to data, find out how to manage and optimize Delta tables, and get to grips with ingesting and processing streaming data. The book will also show you how to address performance problems in Apache Spark apps and Delta Lake. Advanced recipes later in the book will teach you how to use Databricks to implement DataOps and DevOps practices, as well as how to orchestrate and schedule data pipelines using Databricks Workflows. You’ll also go through the full process of setting up and configuring Unity Catalog for data governance. By the end of this book, you’ll be well-versed in building reliable and scalable data pipelines using modern data engineering technologies.
Table of Contents (16 chapters)
Part 1 – Working with Apache Spark and Delta Lake
Part 2 – Data Engineering Capabilities within Databricks

Reducing Delta Lake table size and I/O cost with compression

Delta Lake tables are stored as Parquet files in a directory, along with a transaction log that tracks changes to the table. One of the benefits of using Delta Lake is that it supports various compression codecs for Parquet files, such as gzip, snappy, lzo, zstd, and brotli. Compression can help reduce the size of the table on disk and the amount of data transferred over the network, which can improve performance and save costs.
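
As a quick illustration (a minimal sketch, assuming an active SparkSession named spark and a DataFrame df to write; the codec and path are examples, not taken from the recipe), the codec used for new Parquet data files is typically chosen through the Spark session configuration:

    # Minimal sketch: assumes an existing SparkSession `spark` and a DataFrame `df`.
    # Delta Lake stores data as Parquet, so the standard Spark Parquet codec setting
    # controls how new data files are compressed.
    spark.conf.set("spark.sql.parquet.compression.codec", "zstd")
    df.write.format("delta").mode("overwrite").save("/tmp/delta/events")  # example path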

In this recipe, we will learn how to use compression with Delta Lake tables and how to measure the impact of compression on table size and I/O cost.

How to do it…

  1. Import the required libraries: Start by importing the necessary libraries for working with Delta Lake. In this case, we need the delta module and the SparkSession class from the pyspark.sql module:
    from delta import configure_spark_with_delta_pip, DeltaTable
    from pyspark.sql import SparkSession
    from pyspark.sql.functions...
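
The excerpt ends here. As a rough, hypothetical sketch of how the remaining steps might proceed (the sample data, paths, and codec choices below are assumptions, not the book's code), you could build on the imports from step 1 to create a Delta-enabled session, write the same DataFrame with different codecs, and compare the resulting table sizes with DESCRIBE DETAIL:

    # Hypothetical continuation (not the book's code): build a Delta-enabled session,
    # write the same data with two codecs, and compare the resulting table sizes.
    builder = (
        SparkSession.builder.appName("delta-compression")
        .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
        .config("spark.sql.catalog.spark_catalog",
                "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    )
    spark = configure_spark_with_delta_pip(builder).getOrCreate()

    # Sample data (assumed); any DataFrame would do.
    df = spark.range(1_000_000).withColumnRenamed("id", "value")

    for codec in ["snappy", "zstd"]:
        spark.conf.set("spark.sql.parquet.compression.codec", codec)
        path = f"/tmp/delta/compression_{codec}"
        df.write.format("delta").mode("overwrite").save(path)
        # DESCRIBE DETAIL reports numFiles and sizeInBytes for a Delta table,
        # which lets you compare the on-disk footprint of each codec.
        detail = spark.sql(f"DESCRIBE DETAIL delta.`{path}`")
        detail.select("numFiles", "sizeInBytes").show()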
