Data Engineering with AWS

Book description

The missing expert-led manual for the AWS ecosystem: go from foundations to building data engineering pipelines effortlessly. Purchase of the print or Kindle book includes a free PDF eBook.

Key Features

  • Learn about common data architectures and modern approaches to generating value from big data
  • Explore AWS tools for ingesting, transforming, and consuming data, and for orchestrating pipelines
  • Learn how to architect and implement data lakes and data lakehouses for big data analytics from a data lakes expert

Book Description

Written by a Senior Data Architect with over twenty-five years of industry experience, Data Engineering with AWS aims to make you proficient in using the AWS ecosystem. Taking a thorough, hands-on approach to data, this book gives aspiring and new data engineers a solid theoretical and practical foundation to succeed with AWS.

As you progress, you’ll be taken through the services and the skills you need to architect and implement data pipelines on AWS. You'll begin by reviewing important data engineering concepts and some of the core AWS services that form a part of the data engineer's toolkit. You'll then architect a data pipeline, review raw data sources, transform the data, and learn how the transformed data is used by various data consumers. You’ll also learn about populating data marts and data warehouses along with how a data lakehouse fits into the picture. Later, you'll be introduced to AWS tools for analyzing data, including those for ad-hoc SQL queries and creating visualizations. In the final chapters, you'll understand how the power of machine learning and artificial intelligence can be used to draw new insights from data.

By the end of this AWS book, you'll be able to carry out data engineering tasks and implement a data pipeline on AWS independently.

What you will learn

  • Understand data engineering concepts and emerging technologies
  • Ingest streaming data with Amazon Kinesis Data Firehose
  • Optimize, denormalize, and join datasets with AWS Glue Studio
  • Use Amazon S3 events to trigger a Lambda process to transform a file
  • Run complex SQL queries on data lake data using Amazon Athena
  • Load data into a Redshift data warehouse and run queries
  • Create a visualization of your data using Amazon QuickSight
  • Extract sentiment data from a dataset using Amazon Comprehend
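The hands-on chapters walk through each of these tasks step by step. As a taste of the event-driven pattern in the fourth bullet, here is a minimal, hypothetical sketch (not code from the book) of a Lambda handler parsing the S3 event notification it receives when a new file lands; the book's own exercises add the actual transformation logic on top of this skeleton:

```python
# Sketch of an S3-triggered AWS Lambda handler. S3 delivers an event
# document describing the uploaded object(s); the handler pulls the
# bucket and key out of each record before doing any transformation.

def lambda_handler(event, context):
    """Extract (bucket, key) pairs from an S3 event notification."""
    objects = []
    for record in event.get("Records", []):
        s3 = record.get("s3", {})
        bucket = s3.get("bucket", {}).get("name")
        key = s3.get("object", {}).get("key")
        if bucket and key:
            objects.append((bucket, key))
    # In a real pipeline you would now fetch each object with boto3,
    # transform it (e.g., convert CSV to Parquet), and write the result
    # to a curated-zone bucket.
    return objects
```

Bucket and key names in any test event are illustrative; the event shape shown (`Records[].s3.bucket.name` / `Records[].s3.object.key`) is the standard S3 notification format that Lambda receives.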

Who this book is for

This book is for data engineers, data analysts, and data architects who are new to AWS and looking to extend their skills to the AWS cloud. Anyone new to data engineering who wants to learn the foundational concepts while gaining practical experience with common AWS data engineering services will also find this book useful. A basic understanding of big data topics and Python coding will help you get the most out of this book, but neither is a prerequisite. Familiarity with the AWS console and core services will also help you follow along.

Table of contents

  1. Data Engineering with AWS
  2. Contributors
  3. About the author
  4. Additional contributors
  5. About the reviewers
  6. Preface
    1. Who this book is for
    2. What this book covers
    3. To get the most out of this book
    4. Download the example code files
    5. Download the color images
    6. Conventions used
    7. Get in touch
    8. Share Your Thoughts
  7. Section 1: AWS Data Engineering Concepts and Trends
  8. Chapter 1: An Introduction to Data Engineering
    1. Technical requirements
    2. The rise of big data as a corporate asset
    3. The challenges of ever-growing datasets
    4. Data engineers – the big data enablers
      1. Understanding the role of the data engineer
      2. Understanding the role of the data scientist
      3. Understanding the role of the data analyst
      4. Understanding other common data-related roles
    5. The benefits of the cloud when building big data analytic solutions
    6. Hands-on – creating and accessing your AWS account
      1. Creating a new AWS account
      2. Accessing your AWS account
    7. Summary
  9. Chapter 2: Data Management Architectures for Analytics
    1. Technical requirements
    2. The evolution of data management for analytics
      1. Databases and data warehouses
      2. Dealing with big, unstructured data
      3. A lake on the cloud and a house on that lake
    3. Understanding data warehouses and data marts – fountains of truth
      1. Distributed storage and massively parallel processing
      2. Columnar data storage and efficient data compression
      3. Dimensional modeling in data warehouses
      4. Understanding the role of data marts
      5. Feeding data into the warehouse – ETL and ELT pipelines
    4. Building data lakes to tame the variety and volume of big data
      1. Data lake logical architecture
    5. Bringing together the best of both worlds with the lake house architecture
      1. Data lakehouse implementations
      2. Building a data lakehouse on AWS
    6. Hands-on – configuring the AWS Command Line Interface tool and creating an S3 bucket
      1. Installing and configuring the AWS CLI
      2. Creating a new Amazon S3 bucket
    7. Summary
  10. Chapter 3: The AWS Data Engineer's Toolkit
    1. Technical requirements
    2. AWS services for ingesting data
      1. Overview of AWS Database Migration Service (DMS)
      2. Overview of Amazon Kinesis for streaming data ingestion
      3. Overview of Amazon MSK for streaming data ingestion
      4. Overview of Amazon AppFlow for ingesting data from SaaS services
      5. Overview of AWS Transfer Family for ingestion using FTP/SFTP protocols
      6. Overview of AWS DataSync for ingesting from on-premises storage
      7. Overview of the AWS Snow family of devices for large data transfers
    3. AWS services for transforming data
      1. Overview of AWS Lambda for light transformations
      2. Overview of AWS Glue for serverless Spark processing
      3. Overview of Amazon EMR for Hadoop ecosystem processing
    4. AWS services for orchestrating big data pipelines
      1. Overview of AWS Glue workflows for orchestrating Glue components
      2. Overview of AWS Step Functions for complex workflows
      3. Overview of Amazon Managed Workflows for Apache Airflow (MWAA)
    5. AWS services for consuming data
      1. Overview of Amazon Athena for SQL queries in the data lake
      2. Overview of Amazon Redshift and Redshift Spectrum for data warehousing and data lakehouse architectures
      3. Overview of Amazon QuickSight for visualizing data
    6. Hands-on – triggering an AWS Lambda function when a new file arrives in an S3 bucket
      1. Creating a Lambda layer containing the AWS Data Wrangler library
      2. Creating new Amazon S3 buckets
      3. Creating an IAM policy and role for your Lambda function
      4. Creating a Lambda function
      5. Configuring our Lambda function to be triggered by an S3 upload
    7. Summary
  11. Chapter 4: Data Cataloging, Security, and Governance
    1. Technical requirements
    2. Getting data security and governance right
      1. Common data regulatory requirements
      2. Core data protection concepts
      3. Personal data
      4. Encryption
      5. Anonymized data
      6. Pseudonymized data/tokenization
      7. Authentication
      8. Authorization
      9. Putting these concepts together
    3. Cataloging your data to avoid the data swamp
      1. How to avoid the data swamp
    4. The AWS Glue/Lake Formation data catalog
    5. AWS services for data encryption and security monitoring
      1. AWS Key Management Service (KMS)
      2. Amazon Macie
      3. Amazon GuardDuty
    6. AWS services for managing identity and permissions
      1. AWS Identity and Access Management (IAM) service
      2. Using AWS Lake Formation to manage data lake access
    7. Hands-on – configuring Lake Formation permissions
      1. Creating a new user with IAM permissions
      2. Transitioning to managing fine-grained permissions with AWS Lake Formation
    8. Summary
  12. Section 2: Architecting and Implementing Data Lakes and Data Lake Houses
  13. Chapter 5: Architecting Data Engineering Pipelines
    1. Technical requirements
    2. Approaching the data pipeline architecture
      1. Architecting houses and architecting pipelines
      2. Whiteboarding as an information-gathering tool
      3. Conducting a whiteboarding session
    3. Identifying data consumers and understanding their requirements
    4. Identifying data sources and ingesting data
    5. Identifying data transformations and optimizations
      1. File format optimizations
      2. Data standardization
      3. Data quality checks
      4. Data partitioning
      5. Data denormalization
      6. Data cataloging
      7. Whiteboarding data transformation
    6. Loading data into data marts
    7. Wrapping up the whiteboarding session
    8. Hands-on – architecting a sample pipeline
      1. Detailed notes from the project "Bright Light" whiteboarding meeting of GP Widgets, Inc
    9. Summary
  14. Chapter 6: Ingesting Batch and Streaming Data
    1. Technical requirements
    2. Understanding data sources
      1. Data variety
      2. Data volume
      3. Data velocity
      4. Data veracity
      5. Data value
      6. Questions to ask
    3. Ingesting data from a relational database
      1. AWS Database Migration Service (DMS)
      2. AWS Glue
      3. Other ways to ingest data from a database
      4. Deciding on the best approach for ingesting from a database
    4. Ingesting streaming data
      1. Amazon Kinesis versus Amazon Managed Streaming for Kafka (MSK)
    5. Hands-on – ingesting data with AWS DMS
      1. Creating a new MySQL database instance
      2. Loading the demo data using an Amazon EC2 instance
      3. Creating an IAM policy and role for DMS
      4. Configuring DMS settings and performing a full load from MySQL to S3
      5. Querying data with Amazon Athena
    6. Hands-on – ingesting streaming data
      1. Configuring Kinesis Data Firehose for streaming delivery to Amazon S3
      2. Configuring Amazon Kinesis Data Generator (KDG)
      3. Adding newly ingested data to the Glue Data Catalog
      4. Querying the data with Amazon Athena
    7. Summary
  15. Chapter 7: Transforming Data to Optimize for Analytics
    1. Technical requirements
    2. Transformations – making raw data more valuable
      1. Cooking, baking, and data transformations
      2. Transformations as part of a pipeline
    3. Types of data transformation tools
      1. Apache Spark
      2. Hadoop and MapReduce
      3. SQL
      4. GUI-based tools
    4. Data preparation transformations
      1. Protecting PII data
      2. Optimizing the file format
      3. Optimizing with data partitioning
      4. Data cleansing
    5. Business use case transforms
      1. Data denormalization
      2. Enriching data
      3. Pre-aggregating data
      4. Extracting metadata from unstructured data
    6. Working with change data capture (CDC) data
      1. Traditional approaches – data upserts and SQL views
      2. Modern approaches – the transactional data lake
    7. Hands-on – joining datasets with AWS Glue Studio
      1. Creating a new data lake zone – the curated zone
      2. Creating a new IAM role for the Glue job
      3. Configuring a denormalization transform using AWS Glue Studio
      4. Finalizing the denormalization transform job to write to S3
      5. Creating a transform job to join streaming and film data using AWS Glue Studio
    8. Summary
  16. Chapter 8: Identifying and Enabling Data Consumers
    1. Technical requirements
    2. Understanding the impact of data democratization
      1. A growing variety of data consumers
    3. Meeting the needs of business users with data visualization
      1. AWS tools for business users
    4. Meeting the needs of data analysts with structured reporting
      1. AWS tools for data analysts
    5. Meeting the needs of data scientists and ML models
      1. AWS tools used by data scientists to work with data
    6. Hands-on – creating data transformations with AWS Glue DataBrew
      1. Configuring new datasets for AWS Glue DataBrew
      2. Creating a new Glue DataBrew project
      3. Building your Glue DataBrew recipe
      4. Creating a Glue DataBrew job
    7. Summary
  17. Chapter 9: Loading Data into a Data Mart
    1. Technical requirements
    2. Extending analytics with data warehouses/data marts
      1. Cold data
      2. Warm data
      3. Hot data
    3. What not to do – anti-patterns for a data warehouse
      1. Using a data warehouse as a transactional datastore
      2. Using a data warehouse as a data lake
      3. Using data warehouses for real-time, record-level use cases
      4. Storing unstructured data
    4. Redshift architecture review and storage deep dive
      1. Data distribution across slices
      2. Redshift Zone Maps and sorting data
    5. Designing a high-performance data warehouse
      1. Selecting the optimal Redshift node type
      2. Selecting the optimal table distribution style and sort key
      3. Selecting the right data type for columns
      4. Selecting the optimal table type
    6. Moving data between a data lake and Redshift
      1. Optimizing data ingestion in Redshift
      2. Exporting data from Redshift to the data lake
    7. Hands-on – loading data into an Amazon Redshift cluster and running queries
      1. Uploading our sample data to Amazon S3
      2. IAM roles for Redshift
      3. Creating a Redshift cluster
      4. Creating external tables for querying data in S3
      5. Creating a schema for a local Redshift table
      6. Running complex SQL queries against our data
    8. Summary
  18. Chapter 10: Orchestrating the Data Pipeline
    1. Technical requirements
    2. Understanding the core concepts for pipeline orchestration
      1. What is a data pipeline, and how do you orchestrate it?
      2. How do you trigger a data pipeline to run?
      3. How do you handle the failures of a step in your pipeline?
    3. Examining the options for orchestrating pipelines in AWS
      1. AWS Data Pipeline for managing ETL between data sources
      2. AWS Glue Workflows to orchestrate Glue resources
      3. Apache Airflow as an open source orchestration solution
      4. Pros and cons of using MWAA
      5. AWS Step Functions for a serverless orchestration solution
      6. Pros and cons of using AWS Step Functions
      7. Deciding on which data pipeline orchestration tool to use
    4. Hands-on – orchestrating a data pipeline using AWS Step Functions
      1. Creating new Lambda functions
      2. Creating an SNS topic and subscribing to an email address
      3. Creating a new Step Function state machine
      4. Configuring AWS CloudTrail and Amazon EventBridge
    5. Summary
  19. Section 3: The Bigger Picture: Data Analytics, Data Visualization, and Machine Learning
  20. Chapter 11: Ad Hoc Queries with Amazon Athena
    1. Technical requirements
    2. Amazon Athena – in-place SQL analytics for the data lake
    3. Tips and tricks to optimize Amazon Athena queries
      1. Common file format and layout optimizations
      2. Writing optimized SQL queries
    4. Federating the queries of external data sources with Amazon Athena Query Federation
      1. Querying external data sources using Athena Federated Query
    5. Managing governance and costs with Amazon Athena Workgroups
      1. Athena Workgroups overview
      2. Enforcing settings for groups of users
      3. Enforcing data usage controls
    6. Hands-on – creating an Amazon Athena workgroup and configuring Athena settings
    7. Hands-on – switching workgroups and running queries
    8. Summary
  21. Chapter 12: Visualizing Data with Amazon QuickSight
    1. Technical requirements
    2. Representing data visually for maximum impact
      1. Benefits of data visualization
      2. Popular uses of data visualizations
    3. Understanding Amazon QuickSight's core concepts
      1. Standard versus enterprise edition
      2. SPICE – the in-memory storage and computation engine for QuickSight
    4. Ingesting and preparing data from a variety of sources
      1. Preparing datasets in QuickSight versus performing ETL outside of QuickSight
    5. Creating and sharing visuals with QuickSight analyses and dashboards
      1. Visual types in Amazon QuickSight
    6. Understanding QuickSight's advanced features – ML Insights and embedded dashboards
      1. Amazon QuickSight ML Insights
      2. Amazon QuickSight embedded dashboards
    7. Hands-on – creating a simple QuickSight visualization
      1. Setting up a new QuickSight account and loading a dataset
      2. Creating a new analysis
    8. Summary
  22. Chapter 13: Enabling Artificial Intelligence and Machine Learning
    1. Technical requirements
    2. Understanding the value of ML and AI for organizations
      1. Specialized ML projects
      2. Everyday use cases for ML and AI
    3. Exploring AWS services for ML
      1. AWS ML services
    4. Exploring AWS services for AI
      1. AI for unstructured speech and text
      2. AI for extracting metadata from images and video
      3. AI for ML-powered forecasts
      4. AI for fraud detection and personalization
    5. Hands-on – reviewing reviews with Amazon Comprehend
      1. Setting up a new Amazon SQS message queue
      2. Creating a Lambda function for calling Amazon Comprehend
      3. Adding Comprehend permissions for our IAM role
      4. Adding a Lambda function as a trigger for our SQS message queue
      5. Testing the solution with Amazon Comprehend
    6. Summary
    7. Further reading
  23. Chapter 14: Wrapping Up the First Part of Your Learning Journey
    1. Technical requirements
    2. Looking at the data analytics big picture
      1. Managing complex data environments with DataOps
    3. Examining examples of real-world data pipelines
      1. A decade of data wrapped up for Spotify users
      2. Ingesting and processing streaming files at Netflix scale
    4. Imagining the future – a look at emerging trends
      1. ACID transactions directly on data lake data
      2. More data and more streaming ingestion
      3. Multi-cloud
      4. Decentralized data engineering teams, data platforms, and a data mesh architecture
      5. Data and product thinking convergence
      6. Data and self-serve platform design convergence
      7. Implementations of the data mesh architecture
    5. Hands-on – cleaning up your AWS account
      1. Reviewing AWS Billing to identify the resources being charged for
      2. Closing your AWS account
    6. Summary
    7. Why subscribe?
  24. Other Books You May Enjoy
    1. Packt is searching for authors like you
    2. Share Your Thoughts

Product information

  • Title: Data Engineering with AWS
  • Author(s): Gareth Eagar
  • Release date: December 2021
  • Publisher(s): Packt Publishing
  • ISBN: 9781800560413