Video description
Building frameworks is now an industry norm, and knowing how to visualize, design, plan, and implement data frameworks has become an important skill. The framework we are going to build together is the Metadata-Driven Ingestion Framework. Metadata-driven frameworks allow a company to develop the system just once and have it adopted and reused by various business clusters without additional development, saving the business both time and cost. Think of it as a plug-and-play system.
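To make the plug-and-play idea concrete, here is a minimal sketch in Python of what metadata-driven ingestion can look like. The table layout, paths, and names are invented for illustration and are not the course's actual schema; the point is that onboarding a new source means adding a metadata row, not writing new pipeline code.

```python
# Minimal sketch of metadata-driven ingestion (hypothetical schema).
# Each metadata row describes one source-to-sink copy; onboarding a
# new source means adding a row, not building a new pipeline.
import shutil
from pathlib import Path

# In the course this metadata lives in a SQL database; a list of
# dicts stands in for it here.
INGESTION_METADATA = [
    {"source_path": "landing/finance/transactions.csv",
     "sink_path": "raw/finance/transactions.csv", "enabled": True},
    {"source_path": "landing/hr/employees.csv",
     "sink_path": "raw/hr/employees.csv", "enabled": False},
]

def run_ingestion(metadata: list[dict]) -> None:
    """Copy every enabled source to its sink, driven purely by metadata."""
    for entry in metadata:
        if not entry["enabled"]:
            continue
        source = Path(entry["source_path"])
        sink = Path(entry["sink_path"])
        sink.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(source, sink)  # in ADF this would be a Copy activity

if __name__ == "__main__":
    # Create sample landing files so the sketch runs end to end.
    for entry in INGESTION_METADATA:
        src = Path(entry["source_path"])
        src.parent.mkdir(parents=True, exist_ok=True)
        src.write_text("id,amount\n1,100\n")
    run_ingestion(INGESTION_METADATA)
```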
The first objective of the course is to onboard you onto the Azure Data Factory platform and help you assemble your first Azure Data Factory pipeline. Once you have a good grip on the Azure Data Factory development pattern, it becomes easier to apply the same pattern to onboard other data sources and sinks.
Once you are comfortable building a basic Azure Data Factory pipeline, the second objective is to build a fully fledged, working metadata-driven framework that makes ingestion more dynamic. Furthermore, we will build the framework so that you can audit every batch orchestration and individual pipeline run for business intelligence and operational monitoring.
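As a rough illustration of the auditing idea (again with invented names rather than the course's actual tables), each batch and pipeline run can write a log record that operational monitoring and BI can query later:

```python
# Rough sketch of batch/pipeline run auditing (hypothetical names).
# Each run appends a record; monitoring and BI read these records later.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PipelineRunLog:
    batch_id: int
    pipeline_name: str
    status: str = "Running"  # Running / Succeeded / Failed
    started_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    ended_at: Optional[datetime] = None

RUN_LOG: list[PipelineRunLog] = []  # stands in for a SQL log table

def start_run(batch_id: int, pipeline_name: str) -> PipelineRunLog:
    """Record that a pipeline run has started within a batch."""
    run = PipelineRunLog(batch_id, pipeline_name)
    RUN_LOG.append(run)
    return run

def finish_run(run: PipelineRunLog, status: str) -> None:
    """Close out a run with its final status and end time."""
    run.status = status
    run.ended_at = datetime.now(timezone.utc)

if __name__ == "__main__":
    run = start_run(batch_id=1, pipeline_name="ingest_finance")
    finish_run(run, "Succeeded")
    print(RUN_LOG)
```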
By the end of this course, you will be able to design and implement production-ready data ingestion solutions in Azure.
What You Will Learn
- Learn about Azure Data Factory and Azure Blob Storage
- Understand data engineering, data lakes, and metadata-driven framework concepts
- Look at an industry-based example of how to build ingestion frameworks
- Learn to build dynamic Azure Data Factory pipelines and email notifications with Logic Apps
- Study how to track pipeline and batch runs
- Look at version management with Azure DevOps
Audience
This course is ideal for aspiring data engineers and developers who are curious about Azure Data Factory as an ETL alternative.
You will need a basic PC/laptop; no prior knowledge of Microsoft Azure is required.
About The Author
David Mngadi: David Mngadi is a data management professional inspired by the power of data in our lives. He has helped several companies become more data-driven, both to gain a competitive edge and to meet regulatory requirements. Over the last 15 years, he has had the pleasure of designing and implementing data warehousing solutions in the retail, telco, and banking industries, and more recently in big data lake implementations. He is passionate about technology and teaching programming online.
Table of contents
Chapter 1 : Introduction – Build Your First Azure Data Pipeline
- Introduction to the Course
- Introduction to ADF (Azure Data Factory)
- Requirements Discussion and Technical Architecture
- Register a Free Azure Account
- Create a Data Factory Resource
- Create a Storage Account and Upload Data
- Create Data Lake Gen 2 Storage Account
- Download Storage Explorer
- Create Your First Azure Pipeline
- Closing Remarks
Chapter 2 : Metadata-Driven Ingestion
- Introduction to Metadata-Driven Ingestion
- High-Level Plan
- Create Active Directory User
- Assign the Contributor Role to the User
- Disable Security Defaults
- Creating the Metadata Database
- Install Azure Data Studio
- Create Metadata Tables and Stored Procedures
- Reconfigure Existing Data Factory Artifacts
- Set Up Logic App to Handle Email Notifications
- Modify the Data Factory Pipeline to Send an Email Notification
- Create Linked Service for Metadata Database and Email Dataset
- Create Utility Pipeline to Send Email Notifications
- Explaining the Email Recipients Table
- Explaining the Get Email Addresses Stored Procedure
- Modify Ingestion Pipeline to Use the Email Utility Pipeline
- Tracking the Triggered Pipeline
- Making the Email Notifications Dynamic
- Making Logging of Pipeline Information Dynamic
- Add a New Way to Log the Main Ingestion Pipeline
- Change the Logging of Pipelines to Send Fail Message Only
- Creating Dynamic Datasets
- Reading from Source to Target - Part 1
- Reading from Source to Target - Part 2
- Explaining the Source to Target Stored Procedure
- Add Orchestration Pipeline - Part 1
- Add Orchestration Pipeline - Part 2
- Fixing the Duplicating Batch Ingestions
- Understanding the Pipeline Log and Related Tables
- Understanding the GetBatch Stored Procedure
- Understanding the Set Batch Status and GetRunID
- Setting Up an Azure DevOps Git Repository
- Publishing the Data Factory to Azure DevOps
- Closing Remarks
Chapter 3 : Event-Driven Ingestion
- Introduction
- Read from Azure Storage Plan
- Create Finance Container and Upload Files
- Create Source Dataset
- Write to Data Lake - Raw Plan
- Create Finance Container and Directories
- Create Sink Dataset
- Data Factory Pipeline Plan
- Create Data Factory and Read Metadata
- Add Filter by CSV
- Add Dataset to Read Files
- Add the For Each CSV File Activity and Test Ingestion
- Adding the Event-Based Trigger Plan
- Enable the Event Grid Provider
- Delete File and Add Event-Based Trigger
- Create Event-Based Trigger
- Publish Code to Main Branch and Start Trigger
- Trigger Event-Based Ingestion
- Closing Remarks
Product information
- Title: Azure Data Factory for Beginners - Build Data Ingestion
- Author(s): David Mngadi
- Release date: June 2022
- Publisher(s): Packt Publishing
- ISBN: 9781804610329