Chapter 15. How Spark Runs on a Cluster
Thus far in the book, we have focused on Spark’s properties as a programming interface. We have discussed how the structured APIs take a logical operation, break it up into a logical plan, and convert that to a physical plan consisting of Resilient Distributed Dataset (RDD) operations that execute across a cluster of machines. This chapter focuses on what happens when Spark actually executes that code. We discuss this in an implementation-agnostic way: it depends on neither the cluster manager you’re using nor the code you’re running. At the end of the day, all Spark code runs the same way.
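You can watch this translation happen yourself. The following sketch (the session settings and query are illustrative, not anything this chapter requires) builds a small structured query and asks Spark to print the plans it derived for it:

```scala
import org.apache.spark.sql.SparkSession

// A minimal local session; any existing session would do as well.
val spark = SparkSession.builder()
  .master("local[*]")
  .appName("plan-inspection")
  .getOrCreate()

// A simple structured query: the logical operations we expressed...
val df = spark.range(1000)
  .selectExpr("id * 2 AS doubled")
  .where("doubled > 500")

// ...and the plans Spark derived from them. explain(true) prints the
// parsed and analyzed logical plans, the optimized logical plan, and
// the physical plan that is ultimately run as RDD operations.
df.explain(true)
```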
This chapter covers several key topics:
- The architecture and components of a Spark Application
- The life cycle of a Spark Application inside and outside of Spark
- Important low-level execution properties, such as pipelining
- What it takes to run a Spark Application, as a segue into Chapter 16
Let’s begin with the architecture.
The Architecture of a Spark Application
In Chapter 2, we discussed some of the high-level components of a Spark Application. Let’s review those again:
- The Spark driver: the process “in the driver seat” of your Spark Application. It controls the execution of the application and maintains all of the state of the Spark cluster (the state and tasks of the executors). It must interface with the cluster manager in order to actually get physical resources and launch executors. A minimal sketch of how the driver comes into being follows. ...
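Concretely, the driver process starts when you create a SparkSession. In the sketch below, the application name and master URL are placeholders for whatever your deployment uses:

```scala
import org.apache.spark.sql.SparkSession

// Creating a SparkSession starts (or attaches to) the driver process.
// The master URL names the cluster manager the driver negotiates with
// for executors; "local[*]" would instead run everything in one JVM.
val spark = SparkSession.builder()
  .appName("my-app")                  // placeholder application name
  .master("spark://master-host:7077") // placeholder standalone-mode URL
  .getOrCreate()

// The SparkContext inside the session is the driver's handle on
// cluster state: executors, tasks, and scheduling.
println(spark.sparkContext.master)
```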