Book description
Understand the complexities of modern-day data engineering platforms and explore strategies to address them through use case scenarios presented by an industry expert in big data
Key Features
- Become well-versed with the core concepts of Apache Spark and Delta Lake for building data platforms
- Learn how to ingest, process, and analyze data that can later be used for training machine learning models
- Understand how to operationalize data models in production using curated data
Book Description
In the world of ever-changing data and schemas, it is important to build data pipelines that can auto-adjust to changes. This book will help you build scalable data platforms that managers, data scientists, and data analysts can rely on.
Starting with an introduction to data engineering, along with its key concepts and architectures, this book will show you how to use Microsoft Azure cloud services effectively for data engineering. You'll cover data lake design patterns and the different stages through which data needs to flow in a typical data lake. Once you've explored the main features of Delta Lake for building data lakes with fast performance and governance in mind, you'll advance to implementing the Lambda architecture using Delta Lake. Packed with practical examples and code snippets, this book walks you through real-world scenarios drawn from the author's 10 years of production experience with big data. Finally, you'll cover data lake deployment strategies, which play an important role in provisioning cloud resources and deploying data pipelines in a repeatable and continuous way.
By the end of this data engineering book, you'll know how to effectively deal with ever-changing data and create scalable data pipelines to streamline data science, ML, and artificial intelligence (AI) tasks.
What you will learn
- Discover the challenges you may face in the data engineering world
- Add ACID transactions to Apache Spark using Delta Lake (see the sketch after this list)
- Understand effective design strategies to build enterprise-grade data lakes
- Explore architectural and design patterns for building efficient data ingestion pipelines
- Orchestrate a data pipeline for preprocessing data using Apache Spark and Delta Lake APIs
- Automate deployment and monitoring of data pipelines in production
- Get to grips with securing, monitoring, and managing data pipelines efficiently
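To give a flavour of the Delta Lake techniques the book covers, the sketch below creates a Delta table, upserts rows with MERGE, and reads an earlier version via time travel. This is not code from the book: the session configuration, table path, and sample data are illustrative assumptions based on the publicly documented Delta Lake Python APIs.

```python
# A minimal sketch, assuming pyspark and delta-spark are installed
# (pip install pyspark delta-spark); path and data are hypothetical.
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# Build a SparkSession with the Delta Lake extensions enabled.
builder = (
    SparkSession.builder.appName("delta-acid-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/bronze/customers"  # hypothetical table location

# Create a Delta table: every write is an ACID transaction and a new version.
(
    spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])
    .write.format("delta")
    .mode("overwrite")
    .save(path)
)

# Upsert (MERGE) changed and new rows into the existing table.
updates = spark.createDataFrame([(2, "bobby"), (3, "carol")], ["id", "name"])
target = DeltaTable.forPath(spark, path)
(
    target.alias("t")
    .merge(updates.alias("u"), "t.id = u.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Time travel: query the table as it looked before the merge.
spark.read.format("delta").option("versionAsOf", 0).load(path).show()
```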
Who this book is for
This book is for aspiring data engineers and data analysts who are new to the world of data engineering and are looking for a practical guide to building scalable data platforms. If you already work with PySpark and want to use Delta Lake for data engineering, you'll find this book useful. Basic knowledge of Python, Spark, and SQL is expected.
Table of contents
- Data Engineering with Apache Spark, Delta Lake, and Lakehouse
- Foreword
- Contributors
- About the author
- About the reviewers
- Preface
- Section 1: Modern Data Engineering and Tools
- Chapter 1: The Story of Data Engineering and Analytics
- Chapter 2: Discovering Storage and Compute Data Lakes
- Chapter 3: Data Engineering on Microsoft Azure
- Section 2: Data Pipelines and Stages of Data Engineering
- Chapter 4: Understanding Data Pipelines
- Chapter 5: Data Collection Stage – The Bronze Layer
- Chapter 6: Understanding Delta Lake
  - Understanding how Delta Lake enables the lakehouse
  - Understanding Delta Lake
  - Creating a Delta Lake table
  - Changing data in an existing Delta Lake table
  - Performing time travel
  - Performing upserts of data
  - Understanding isolation levels
  - Understanding concurrency control
  - Cleaning up Azure resources
  - Summary
- Chapter 7: Data Curation Stage – The Silver Layer
- Chapter 8: Data Aggregation Stage – The Gold Layer
- Section 3: Data Engineering Challenges and Effective Deployment Strategies
- Chapter 9: Deploying and Monitoring Pipelines in Production
- Chapter 10: Solving Data Engineering Challenges
- Chapter 11: Infrastructure Provisioning
- Chapter 12: Continuous Integration and Deployment (CI/CD) of Data Pipelines
- Other Books You May Enjoy
Product information
- Title: Data Engineering with Apache Spark, Delta Lake, and Lakehouse
- Author(s):
- Release date: October 2021
- Publisher(s): Packt Publishing
- ISBN: 9781801077743