Book description
Integrating data from multiple sources is essential in the age of big data, but it can be a challenging and time-consuming task. This handy cookbook provides dozens of ready-to-use recipes for using Apache Sqoop, the command-line interface application that optimizes data transfers between relational databases and Hadoop.
Sqoop is both powerful and bewildering, but with this cookbook’s problem-solution-discussion format, you’ll quickly learn how to deploy and then apply Sqoop in your environment. The authors provide MySQL, Oracle, and PostgreSQL database examples on GitHub that you can easily adapt for SQL Server, Netezza, Teradata, or other relational systems.
- Transfer data from a single database table into your Hadoop ecosystem
- Keep table data and Hadoop in sync by importing data incrementally
- Import data from more than one database table
- Customize transferred data by calling various database functions
- Export generated, processed, or backed-up data from Hadoop to your database
- Run Sqoop within Oozie, Hadoop’s specialized workflow scheduler
- Load data into Hadoop’s data warehouse (Hive) or database (HBase)
- Handle installation, connection, and syntax issues common to specific database vendors
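Each recipe boils down to a short command-line invocation. As a taste of the format, here is a minimal sketch of three of the tasks above; the JDBC URL, credentials, database name (`sqoop`), table name (`cities`), and HDFS path are hypothetical placeholders, not examples taken from the book.

```bash
# Import a single table from MySQL into HDFS.
sqoop import \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop --password sqoop \
  --table cities

# Incremental import: append only rows whose id exceeds the last-seen value.
sqoop import \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop --password sqoop \
  --table cities \
  --incremental append --check-column id --last-value 100

# Export generated or processed data from HDFS back into a database table.
sqoop export \
  --connect jdbc:mysql://mysql.example.com/sqoop \
  --username sqoop --password sqoop \
  --table cities \
  --export-dir /user/hadoop/cities
```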
Table of contents
- Apache Sqoop Cookbook
- Foreword
- Preface
- 1. Getting Started
- 2. Importing Data
- 3. Incremental Import
- 4. Free-Form Query Import
- 5. Export
- 6. Hadoop Ecosystem Integration
- Scheduling Sqoop Jobs with Oozie
- Specifying Commands in Oozie
- Using Property Parameters in Oozie
- Installing JDBC Drivers in Oozie
- Importing Data Directly into Hive
- Using Partitioned Hive Tables
- Replacing Special Delimiters During Hive Import
- Using the Correct NULL String in Hive
- Importing Data into HBase
- Importing All Rows into HBase
- Improving Performance When Importing into HBase
- 7. Specialized Connectors
- Overriding Imported boolean Values in PostgreSQL Direct Import
- Importing a Table Stored in Custom Schema in PostgreSQL
- Exporting into PostgreSQL Using pg_bulkload
- Connecting to MySQL
- Using Direct MySQL Import into Hive
- Using the upsert Feature When Exporting into MySQL
- Importing from Oracle
- Using Synonyms in Oracle
- Faster Transfers with Oracle
- Importing into Avro with OraOop
- Choosing the Proper Connector for Oracle
- Exporting into Teradata
- Using the Cloudera Teradata Connector
- Using Long Column Names in Teradata
- About the Authors
- Colophon
- Copyright
Product information
- Title: Apache Sqoop Cookbook
- Author(s): Kathleen Ting, Jarek Jarcec Cecho
- Release date: July 2013
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781449364588