Chapter 2. Data Movement
Now that we’ve discussed considerations around storing and modeling data in Hadoop, we’ll move to the equally important subject of moving data between external systems and Hadoop. This includes ingesting data into Hadoop from systems such as relational databases or logs, and extracting data from Hadoop for delivery to external systems. We’ll spend a good part of this chapter on considerations and best practices for data ingestion into Hadoop, then dive more deeply into specific ingestion tools such as Flume and Sqoop. We’ll close with considerations and recommendations for extracting data from Hadoop.
Data Ingestion Considerations
Just as important as the decisions around how to store data in Hadoop, which we discussed in Chapter 1, are the architectural decisions around getting that data into your Hadoop cluster. Although Hadoop provides a filesystem client that makes it easy to copy files in and out of Hadoop (a brief sketch of this client appears after the following list), most applications implemented on Hadoop involve ingestion of disparate data types from multiple sources, each with differing requirements for frequency of ingestion. Common data sources for Hadoop include:
- Traditional data management systems such as relational databases and mainframes
- Logs, machine-generated data, and other forms of event data
- Files being imported from existing enterprise data storage systems
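As a minimal sketch of the filesystem client mentioned above, the following Java program copies a file into HDFS and back out again through the org.apache.hadoop.fs.FileSystem API. The file paths are hypothetical placeholders; the same copies can also be done from the command line with hadoop fs -put and hadoop fs -get.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCopyExample {
  public static void main(String[] args) throws Exception {
    // Picks up fs.defaultFS and other settings from the
    // Hadoop configuration files on the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Copy a local file into HDFS (hypothetical paths).
    fs.copyFromLocalFile(new Path("/tmp/events.log"),
                         new Path("/data/raw/events.log"));

    // Copy the file back out of HDFS to the local filesystem.
    fs.copyToLocalFile(new Path("/data/raw/events.log"),
                       new Path("/tmp/events-copy.log"));

    fs.close();
  }
}

This kind of one-off copy is fine for simple file transfers, but as the rest of this section discusses, most production ingestion pipelines need more than a filesystem copy.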
There are a number of factors to take into consideration when you’re importing data into Hadoop from these different sources.