A DataNode's primary role in a Hadoop cluster is to store data; jobs are executed as tasks on these nodes. Tasks are scheduled so that batch processing happens close to the data, by assigning each task to a node that most likely already holds the data it needs. This data locality also keeps batch jobs efficient, because moving the computation to the data is far cheaper than moving large volumes of data across the network to the computation.
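To make this concrete, the following is a minimal sketch of a classic MapReduce word-count job written against the standard Hadoop Java API (the class names, word-count logic, and input/output paths are illustrative, not taken from the text). The locality described above is handled by the framework itself: each input split corresponds to an HDFS block, and the scheduler tries to launch the map task for a split on a DataNode that stores that block.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map task: scheduled on (or near) the DataNode holding its input split,
  // so the block is read locally rather than pulled across the network.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer tokens = new StringTokenizer(value.toString());
      while (tokens.hasMoreTokens()) {
        word.set(tokens.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce task: sums the counts emitted by the mappers for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input splits are derived from HDFS blocks; the scheduler uses their
    // locations to place map tasks close to the data.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

When such a job is submitted (for example with hadoop jar), only the relatively small intermediate key-value pairs are shuffled across the network to the reducers; the bulk of the input is read locally on the nodes where it is stored.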
The following figure shows the details and inner workings of a typical Hadoop batch process:
Figure 06: MapReduce in action
Here, we see that the job, when initiated, is divided into a number of mapper ...