Part 2.

Part 2 teaches how to use two popular open source distributed computing frameworks: Hadoop and Spark. Hadoop is the originator and foundation of contemporary distributed computing. We'll explore how to use Hadoop streaming and how to write Hadoop jobs with the mrjob library. We'll also learn Spark, a modern distributed computing framework that can take full advantage of today's high-memory compute resources. You can use the tools and techniques in this part for large data in categories 2 and 3: tasks that need parallelization to finish in a reasonable amount of time.
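Both frameworks build on the same map-shuffle-reduce programming model. As a preview, here is a minimal pure-Python sketch of that pattern applied to a word count; the function names are illustrative only, not part of Hadoop's or Spark's API:

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the line.
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Shuffle phase: group all values by key, as the framework does
    # when routing mapper output to reducers.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups.items()

def reducer(key, values):
    # Reduce phase: combine all values for one key into a single result.
    return key, sum(values)

lines = ["the quick brown fox", "the lazy dog"]
mapped = (pair for line in lines for pair in mapper(line))
counts = dict(reducer(key, values) for key, values in shuffle(mapped))
print(counts["the"])  # "the" appears once in each input line
```

In Hadoop and Spark, the mapper and reducer run in parallel across many machines and the framework handles the shuffle; the logic you write stays this small.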


O’Reilly members experience books, live events, courses curated by job role, and more from O’Reilly and nearly 200 top publishers.