Chapter 7. How MapReduce Works
In this chapter, we look at how MapReduce in Hadoop works in detail. This knowledge provides a good foundation for writing more advanced MapReduce programs, which we will cover in the following two chapters.
Anatomy of a MapReduce Job Run
You can run a MapReduce job with a single method call: submit() on a Job object (you can also call waitForCompletion(), which submits the job if it hasn't been submitted already, then waits for it to finish).[51] This method call conceals a great deal of processing behind the scenes. This section uncovers the steps Hadoop takes to run a job.
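The difference between submit() and waitForCompletion() is easiest to see in a small driver program. The following is a minimal sketch (the class name MinimalJobDriver and the command-line input and output paths are placeholders, not examples from this book) that builds a Job and blocks until it finishes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MinimalJobDriver {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "minimal job");
    job.setJarByClass(MinimalJobDriver.class);

    // No mapper or reducer is set here, so Hadoop falls back to the
    // identity Mapper and Reducer; a real job would call
    // setMapperClass() and setReducerClass().
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // waitForCompletion() submits the job if it hasn't been submitted
    // already, then polls its progress until it completes; passing true
    // prints progress to the console.
    boolean success = job.waitForCompletion(true);
    System.exit(success ? 0 : 1);
  }
}

Calling job.submit() instead would return immediately after submission, leaving the client to check the job's status itself.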
The whole process is illustrated in Figure 7-1. At the highest level, there are five independent entities:[52]
The client, which submits the MapReduce job.
The YARN resource manager, which coordinates the allocation of compute resources on the cluster.
The YARN node managers, which launch and monitor the compute containers on machines in the cluster.
The MapReduce application master, which coordinates the tasks running the MapReduce job. The application master and the MapReduce tasks run in containers that are scheduled by the resource manager and managed by the node managers.
The distributed filesystem (normally HDFS, covered in Chapter 3), which is used for sharing job files between the other entities.
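As a concrete illustration of how the client ends up talking to these entities, the choice of framework, the resource manager's address, and the shared filesystem are all plain configuration properties. The snippet below is only a sketch: the property names are standard Hadoop and YARN keys, but the hostnames are invented for illustration, and in practice these values normally come from mapred-site.xml, yarn-site.xml, and core-site.xml rather than being set in code:

import org.apache.hadoop.conf.Configuration;

public class YarnClientConfig {
  public static Configuration yarnConfiguration() {
    Configuration conf = new Configuration();

    // Run MapReduce jobs on YARN rather than in the local runner.
    conf.set("mapreduce.framework.name", "yarn");

    // Address of the YARN resource manager the client submits to
    // (hostname is illustrative).
    conf.set("yarn.resourcemanager.hostname", "resourcemanager.example.com");

    // The distributed filesystem used to share job files between the
    // client, the application master, and the tasks (hostname is
    // illustrative).
    conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

    return conf;
  }
}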
Job Submission
The submit() method on Job ...