Simplifying Data Engineering and Analytics with Delta

By: Anindita Mahapatra
4.9 (15)

Overview of this book

Delta helps you generate reliable insights at scale and simplifies architecture around data pipelines, allowing you to focus primarily on refining the use cases being worked on. This is especially important when you consider that existing architecture is frequently reused for new use cases. In this book, you’ll learn about the principles of distributed computing, data modeling techniques, and big data design patterns and templates that help solve end-to-end data flow problems for common scenarios and are reusable across use cases and industry verticals. You’ll also learn how to recover from errors and the best practices around handling structured, semi-structured, and unstructured data using Delta. After that, you’ll get to grips with features such as ACID transactions on big data, disciplined schema evolution, time travel to help rewind a dataset to a different time or version, and unified batch and streaming capabilities that will help you build agile and robust data products. By the end of this Delta book, you’ll be able to use Delta as the foundational block for creating analytics-ready data that fuels all AI/BI use cases.
Table of Contents (18 chapters)

Section 1 – Introduction to Delta Lake and Data Engineering Principles
Section 2 – End-to-End Process of Building Delta Pipelines
Section 3 – Operationalizing and Productionalizing Delta Pipelines

Delta cloning

Cloning is the process of making a copy. In the previous section, we said that data movement and data copies should be minimized whenever possible, because keeping copies in sync and reconciling them requires constant effort. However, in some cases a copy is unavoidable for business reasons: archiving data, reproducing an MLflow experiment in a different environment, running short-term experiments on production data, sharing data with a different line of business (LOB), or tweaking a few table properties without affecting the original source, especially when downstream consumers rely on it under certain assumptions. The sketch below illustrates the experimentation scenario.
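
As a concrete sketch of the experimentation scenario (the table names prod.sales and sandbox.sales_experiment are hypothetical, and the syntax shown is Databricks-flavored Delta SQL), a short-lived copy of a production table can be created, modified, and discarded without the source ever being touched:

    -- Take a lightweight copy of the production table
    -- (shallow vs deep cloning is explained in the next paragraph).
    CREATE TABLE sandbox.sales_experiment SHALLOW CLONE prod.sales;

    -- Tweak a table property on the clone; prod.sales is unaffected.
    ALTER TABLE sandbox.sales_experiment
      SET TBLPROPERTIES ('delta.appendOnly' = 'true');

    -- Discard the copy once the experiment is done.
    DROP TABLE sandbox.sales_experiment;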

Shallow cloning copies only a table's metadata, whereas deep cloning copies both the metadata and the underlying data files. If a shallow clone suffices, prefer it: it is lightweight and inexpensive because the clone continues to reference the source table's data files, whereas a deep clone is a more involved operation that materializes a fully independent copy. Both forms are sketched below.
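
As a rough sketch of the syntax (again Databricks-flavored Delta SQL with hypothetical table names), the following shows both forms, plus a clone of a historical snapshot via time travel:

    -- Shallow clone: copies only the transaction log (metadata);
    -- the clone keeps referencing the source table's data files.
    CREATE TABLE IF NOT EXISTS dev.churn_features
      SHALLOW CLONE prod.churn_features;

    -- Deep clone: copies metadata and data files, yielding a fully
    -- independent table, e.g., for archiving or cross-LOB sharing.
    CREATE OR REPLACE TABLE archive.churn_features_2022
      DEEP CLONE prod.churn_features;

    -- Either form can clone a historical version of the source.
    CREATE TABLE audit.churn_features_v42
      DEEP CLONE prod.churn_features VERSION AS OF 42;

Note that because a shallow clone still reads the source's data files, running VACUUM on the source or dropping it can break the clone; a deep clone has no such dependency.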

...
