Book description
Get up to speed with Dataproc, the fully managed and highly scalable Google Cloud service for running open source big data tools and frameworks, including Hadoop, Spark, Flink, and Presto. This cookbook shows data engineers, data scientists, data analysts, and cloud architects how to use Dataproc, integrated with the rest of Google Cloud, for data lake modernization, ETL, and secure data science at a fraction of the cost of comparable on-premises deployments.
Narasimha Sadineni from Google and former Googler Anu Venkataraman show you how to set up and run Hadoop and Spark jobs on Dataproc. You'll learn how to create Dataproc clusters and run data engineering and data science workloads on long-running clusters, ephemeral clusters, and serverless infrastructure. Along the way, you'll gain an understanding of Dataproc itself, as well as orchestration, logging and monitoring, the Spark History Server, and migration patterns.
This cookbook includes hands-on examples for configuring, logging, and securing clusters, and for migrating from on-premises clusters to Dataproc. You'll learn how to:
- Create Dataproc clusters on Compute Engine and Kubernetes Engine (see the Python sketch after this list)
- Run data science workloads on Dataproc
- Execute Spark jobs on Dataproc Serverless
- Optimize Dataproc clusters to be cost effective and performant
- Monitor Spark jobs in various ways
- Orchestrate various workloads and activities
- Use different methods for migrating data and workloads from existing Hadoop clusters to Dataproc
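As a taste of the Python approach covered in recipe 1.8, here is a minimal sketch of creating a cluster with the google-cloud-dataproc client library. The project ID, region, cluster name, and machine types below are illustrative placeholders, not values taken from the book.

```python
# Minimal sketch (not from the book): creating a Dataproc cluster with the
# google-cloud-dataproc Python client library. All names are placeholders.
from google.cloud import dataproc_v1

project_id = "my-project"  # placeholder Google Cloud project ID
region = "us-central1"     # placeholder region

# The client must target the regional Dataproc endpoint.
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "project_id": project_id,
    "cluster_name": "example-cluster",  # placeholder cluster name
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-2"},
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-2"},
    },
}

# create_cluster returns a long-running operation; result() blocks until the
# cluster is ready (or raises on failure).
operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
print(f"Cluster created: {operation.result().cluster_name}")
```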
Table of contents
- Brief Table of Contents (Not Yet Final)
- 1. Creating a Dataproc Cluster
- 1.1. Installing Google Cloud CLI
- 1.2. Granting IAM Privileges to a User
- 1.3. Configuring a Network and Firewall Rules
- 1.4. Creating a Dataproc Cluster from the UI
- 1.5. Creating a Dataproc Cluster Using gcloud
- 1.6. Creating a Dataproc Cluster Using API Endpoints
- 1.7. Creating a Dataproc Cluster Using Terraform
- 1.8. Creating a Cluster Using Python
- 1.9. Duplicating a Dataproc Cluster
- 2. Running Hive/Spark/Sqoop Workloads
- 2.1. Adding Required Privileges for Jobs
- 2.2. Generating 1 TB of Data Using a MapReduce Job
- 2.3. Running a Hive Job to Show Records from an Employee Table
- 2.4. Converting XML Data to Parquet Using Scala Spark on Dataproc
- 2.5. Converting XML Data to Parquet Using PySpark on Dataproc
- 2.6. Submitting a SparkR Job
- 2.7. Migrating Data from Cloud SQL to Hive Using a Sqoop Job
- 2.8. Choosing Deploy Modes When Submitting a Spark Job to Dataproc
- 3. Advanced Dataproc Cluster Configuration
- 3.1. Creating an Autoscaling Policy
- 3.2. Attaching an Autoscaling Policy to a Dataproc Cluster
- 3.3. Optimizing Cluster Costs with a Mixed On-Demand and Spot Instance Autoscaling Policy
- 3.4. Adding Local SSDs to Dataproc Worker Nodes
- 3.5. Creating a Cluster with a Custom Image
- 3.6. Building a Cluster with Custom Machine Types
- 3.7. Bootstrapping Dataproc Clusters with Initialization Scripts
- 3.8. Scheduling Automatic Deletion of Unused Clusters
- 3.9. Overriding Hadoop Configurations
- 4. Serverless Spark and Ephemeral Dataproc Clusters
- 4.1. Running on Dataproc: Serverless vs Ephemeral Clusters
- 4.2. Running a Sequence of Jobs on an Ephemeral Cluster
- 4.3. Executing a Spark Batch Job to Convert XML Data to Parquet on Dataproc Serverless
- 4.4. Running a Serverless Job Using Premium Tier Configuration
- 4.5. Giving a Unique Custom Name to a Dataproc Serverless Spark Job
- 4.6. Cloning a Dataproc Serverless Spark Job
- 4.7. Running a Serverless Job on the Spark RAPIDS Accelerator
- 4.8. Configuring a Spark History Server
- 4.9. Writing Spark Events to the Spark History Server from Dataproc Serverless
- 4.10. Monitoring Serverless Spark Jobs
- 4.11. Calculating the Price of a Serverless Batch
- 5. Dataproc Metastore
- 5.1. Creating a Dataproc Metastore Service Instance
- 5.2. Attaching a DPMS Instance to One or More Clusters
- 5.3. Creating Tables and Verifying Metadata in DPMS
- 5.4. Installing an Open Source Hive Metastore
- 5.5. Attaching an External Apache Hive Metastore to the Cluster
- 5.6. Searching for Metadata in a Dataplex Data Catalog
- 5.7. Automating the Backup of a DPMS Instance
- 6. Dataproc Security
- 6.1. Managing Identities in Dataproc Clusters
- 6.2. Securing Your Perimeter Using VPC Service Controls
- 6.3. Authenticating Using Kerberos
- 6.4. Installing Ranger
- 6.5. Securing Cluster Resources Using Ranger
- 6.6. Managing Credentials in the Google Cloud Environment
- 6.7. Enforcing Restrictions Across All Clusters
- 6.8. Tokenizing Sensitive Data
- About the Authors
Product information
- Title: Dataproc Cookbook
- Author(s): Narasimha Sadineni, Anu Venkataraman
- Release date: June 2025
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781098157708