Book description
Over 70 recipes to help you use Apache Spark as your single big data computing platform and master its libraries
About This Book
- This book contains recipes on how to use Apache Spark as a unified compute engine
- Covers how to connect various source systems to Apache Spark
- Covers various aspects of machine learning, including supervised and unsupervised learning and recommendation engines
Who This Book Is For
This book is for data engineers, data scientists, and those who want to implement Spark for real-time data processing. Anyone who is using Spark (or is planning to) will benefit from this book. The book assumes you have a basic knowledge of Scala as a programming language.
What You Will Learn
- Install and configure Apache Spark with various cluster managers & on AWS
- Set up a development environment for Apache Spark, including the Databricks Cloud notebook
- Find out how to operate on data in Spark with schemas
- Get to grips with real-time streaming analytics using Spark Streaming & Structured Streaming
- Master supervised learning and unsupervised learning using MLlib
- Build a recommendation engine using MLlib
- Process graphs using the GraphX and GraphFrames libraries
- Develop a set of common applications and solutions that address complex big data problems
In Detail
While Apache Spark 1.x gained a lot of traction and adoption in its early years, Spark 2.x delivers notable improvements in its APIs, schema awareness, performance, and Structured Streaming, providing simpler building blocks for better, faster, smarter, and more accessible big data applications. This book uncovers all these features in the form of structured recipes for analyzing and maturing large, complex datasets.
Starting with installing and configuring Apache Spark with various cluster managers, you will learn to set up development environments. You will then be introduced to working with RDDs, DataFrames, and Datasets to operate on schema-aware data, and to real-time streaming with sources such as the Twitter stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark.
Last but not least, the final few chapters delve deeper into the concepts of graph processing using GraphX, securing your implementations, cluster optimization, and troubleshooting.
Style and approach
This book is packed with intuitive recipes supported with line-by-line explanations to help you understand Spark 2.x's real-time processing capabilities and deploy scalable big data solutions. This is a valuable resource for data scientists and those working on large-scale data projects.
Table of contents
- www.PacktPub.com
- Preface
- Getting Started with Apache Spark
- Introduction
- Leveraging Databricks Cloud
- Deploying Spark using Amazon EMR
- Installing Spark from binaries
- Building the Spark source code with Maven
- Launching Spark on Amazon EC2
- Deploying Spark on a cluster in standalone mode
- Deploying Spark on a cluster with Mesos
- Deploying Spark on a cluster with YARN
- Understanding SparkContext and SparkSession
- Understanding resilient distributed dataset - RDD
- Developing Applications with Spark
- Introduction
- Exploring the Spark shell
- Developing a Spark application in Eclipse with Maven
- Developing a Spark application in Eclipse with SBT
- Developing a Spark application in IntelliJ IDEA with Maven
- Developing a Spark application in IntelliJ IDEA with SBT
- Developing applications using the Zeppelin notebook
- Setting up Kerberos to do authentication
- Enabling Kerberos authentication for Spark
- Spark SQL
- Understanding the evolution of schema awareness
- Understanding the Catalyst optimizer
- Inferring schema using case classes
- Programmatically specifying the schema
- Understanding the Parquet format
- Loading and saving data using the JSON format
- Loading and saving data from relational databases
- Loading and saving data from an arbitrary source
- Understanding joins
- Analyzing nested structures
- Working with External Data Sources
- Spark Streaming
- Getting Started with Machine Learning
- Supervised Learning with MLlib — Regression
- Supervised Learning with MLlib — Classification
- Unsupervised Learning
- Recommendations Using Collaborative Filtering
- Graph Processing Using GraphX and GraphFrames
- Optimizations and Performance Tuning
Product information
- Title: Apache Spark 2.x Cookbook
- Author(s):
- Release date: May 2017
- Publisher(s): Packt Publishing
- ISBN: 9781787127265