Book description
Data is bigger, arrives faster, and comes in a variety of formats, and it all needs to be processed at scale for analytics or machine learning. But how can you process such varied workloads efficiently? Enter Apache Spark.
Updated to include Spark 3.0, this second edition shows data engineers and data scientists why structure and unification in Spark matter. Specifically, this book explains how to perform simple and complex data analytics and employ machine learning algorithms. Through step-by-step walk-throughs, code snippets, and notebooks, you'll be able to:
- Learn Python, SQL, Scala, or Java high-level Structured APIs
- Understand Spark operations and SQL Engine
- Inspect, tune, and debug Spark operations with Spark configurations and Spark UI
- Connect to data sources: JSON, Parquet, CSV, Avro, ORC, Hive, S3, or Kafka
- Perform analytics on batch and streaming data using Structured Streaming
- Build reliable data pipelines with open source Delta Lake and Spark
- Develop machine learning pipelines with MLlib and productionize models using MLflow
Table of contents
- Foreword
- Preface
- 1. Introduction to Apache Spark: A Unified Analytics Engine
- 2. Downloading Apache Spark and Getting Started
- 3. Apache Spark’s Structured APIs
- 4. Spark SQL and DataFrames: Introduction to Built-in Data Sources
- 5. Spark SQL and DataFrames: Interacting with External Data Sources
- 6. Spark SQL and Datasets
- 7. Optimizing and Tuning Spark Applications
- 8. Structured Streaming
- Evolution of the Apache Spark Stream Processing Engine
- The Programming Model of Structured Streaming
- The Fundamentals of a Structured Streaming Query
- Streaming Data Sources and Sinks
- Data Transformations
- Stateful Streaming Aggregations
- Streaming Joins
- Arbitrary Stateful Computations
- Performance Tuning
- Summary
- 9. Building Reliable Data Lakes with Apache Spark
- The Importance of an Optimal Storage Solution
- Databases
- Data Lakes
- Lakehouses: The Next Step in the Evolution of Storage Solutions
- Building Lakehouses with Apache Spark and Delta Lake
- Configuring Apache Spark with Delta Lake
- Loading Data into a Delta Lake Table
- Loading Data Streams into a Delta Lake Table
- Enforcing Schema on Write to Prevent Data Corruption
- Evolving Schemas to Accommodate Changing Data
- Transforming Existing Data
- Auditing Data Changes with Operation History
- Querying Previous Snapshots of a Table with Time Travel
- Summary
- 10. Machine Learning with MLlib
- 11. Managing, Deploying, and Scaling Machine Learning Pipelines with Apache Spark
- 12. Epilogue: Apache Spark 3.0
- Index
- About the Authors
Product information
- Title: Learning Spark, 2nd Edition
- Author(s):
- Release date: July 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781492050049