In-Memory Analytics with Apache Arrow

Book description

Process tabular data and build high-performance query engines on modern CPUs and GPUs using Apache Arrow, a standardized, language-independent, columnar in-memory format

Key Features

  • Learn about Apache Arrow's data types and interoperability with pandas and Parquet
  • Work with Apache Arrow Flight RPC, Compute, and Dataset APIs to produce and consume tabular data
  • Reviewed, contributed to, and supported by Dremio, the co-creator of Apache Arrow

Book Description

Apache Arrow is designed to accelerate analytics and to make it easy to exchange data across big data systems.

In-Memory Analytics with Apache Arrow begins with a quick overview of the Apache Arrow format before walking you through a variety of real-world use cases that show off Arrow's versatility and benefits. You'll cover key tasks such as enhancing data science workflows with Arrow, using Arrow and Apache Parquet with Apache Spark and Jupyter for better performance and hassle-free data translation, and working with Perspective, an open source interactive graphical and tabular analysis tool for browsers. As you advance, you'll explore the different data interchange and storage formats and become well versed in the relationships between Arrow, Parquet, Feather, Protobuf, FlatBuffers, JSON, and CSV. In addition to understanding the basic structure of the Arrow Flight and Flight SQL protocols, you'll learn about Dremio's use of Apache Arrow to enhance SQL analytics and discover how Arrow can be used in browser-based web apps. Finally, you'll get to grips with the upcoming features of Arrow to help you stay ahead of the curve.

By the end of this book, you will have all the building blocks to create useful, efficient, and powerful analytical services and utilities with Apache Arrow.

What you will learn

  • Use Apache Arrow libraries to access data files both locally and in the cloud
  • Understand the zero-copy elements of the Apache Arrow format
  • Improve read performance by memory-mapping files with Apache Arrow (a short sketch follows this list)
  • Produce or consume Apache Arrow data efficiently using the Arrow C Data API
  • Use the Apache Arrow Compute APIs to perform complex operations
  • Create Arrow Flight servers and clients for transferring data quickly
  • Build the Arrow libraries locally and contribute back to the community
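
As a taste of these topics, here is a minimal sketch in Python using pyarrow (not taken from the book); the file name example.parquet and the toy columns are invented purely for illustration:

    import pyarrow as pa
    import pyarrow.compute as pc
    import pyarrow.parquet as pq

    # Build a small in-memory Arrow table and persist it as Parquet
    # (example.parquet is a hypothetical file name used only in this sketch).
    table = pa.table({"passenger_count": [1, 2, 4], "fare": [5.5, 9.0, 12.75]})
    pq.write_table(table, "example.parquet")

    # Memory-map the file on read so the OS pages in bytes on demand
    # instead of copying the whole file into a private buffer first.
    readback = pq.read_table("example.parquet", memory_map=True)

    # Run an Arrow Compute function on a column without leaving the Arrow format.
    print(pc.mean(readback["fare"]))

Note that memory mapping only changes how the raw file bytes are accessed; Parquet data is still decoded into Arrow memory, whereas Arrow IPC (Feather) files can be mapped and read with true zero copies.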

Who this book is for

This book is for developers, data analysts, and data scientists looking to explore the capabilities of Apache Arrow from the ground up. It will also be useful for engineers building utilities for data analytics and query engines, or otherwise working with tabular data, regardless of programming language. Some familiarity with basic concepts of data analysis will help you get the most out of this book, but isn't required. Code examples are provided in C++, Go, and Python.

Table of contents

  1. In-Memory Analytics with Apache Arrow
  2. Foreword
  3. Acknowledgments
  4. Contributors
  5. About the author
  6. About the reviewers
  7. Preface
    1. Who this book is for
    2. To get the most out of this book
    3. Download the example code files
    4. Download the color images
    5. Conventions used
    6. Get in touch
    7. Share Your Thoughts
  8. Section 1: Overview of What Arrow Is, its Capabilities, Benefits, and Goals
  9. Chapter 1: Getting Started with Apache Arrow
    1. Technical requirements
    2. Understanding the Arrow format and specifications
    3. Why does Arrow use a columnar in-memory format?
    4. Learning the terminology and physical memory layout
      1. Quick summary of physical layouts, or TL;DR
      2. How to speak Arrow
    5. Arrow format versioning and stability
    6. Would you download a library? Of course!
    7. Setting up your shooting range
      1. Using pyarrow for Python
      2. C++ for the 1337 coders
      3. Go Arrow go!
    8. Summary
    9. References
  10. Chapter 2: Working with Key Arrow Specifications
    1. Technical requirements
    2. Playing with data, wherever it might be!
      1. Working with Arrow tables
      2. Accessing data files with pyarrow
      3. Accessing data files with Arrow in C++
    3. pandas firing Arrow
      1. Putting pandas in your quiver
      2. Making pandas run fast
      3. Keeping pandas from running wild
    4. Sharing is caring… especially when it's your memory
      1. Diving into memory management
      2. Managing buffers for performance
      3. Crossing the boundaries
    5. Summary
  11. Chapter 3: Data Science with Apache Arrow
    1. Technical requirements
    2. ODBC takes an Arrow to the knee
    3. Lost in translation
    4. SPARKing new ideas on Jupyter
      1. Understanding the integration
      2. Everyone gets a containerized development environment!
      3. SPARKing joy with Arrow and PySpark
    5. Interactive charting powered by Arrow
    6. Stretching workflows onto Elasticsearch
      1. Indexing the data
    7. Summary
  12. Section 2: Interoperability with Arrow: pandas, Parquet, Flight, and Datasets
  13. Chapter 4: Format and Memory Handling
    1. Technical requirements
    2. Storage versus runtime in-memory versus message-passing formats
      1. Long-term storage formats
      2. In-memory runtime formats
      3. Message-passing formats
      4. Summing up
    3. Passing your Arrows around
      1. What is this sorcery?!
      2. Producing and consuming Arrows
    4. Learning about memory cartography
      1. The base case
      2. Parquet versus CSV
      3. Mapping data into memory
      4. Too long; didn't read (TL;DR) – Computers are magic
    5. Summary
  14. Chapter 5: Crossing the Language Barrier with the Arrow C Data API
    1. Technical requirements
    2. Using the Arrow C data interface
      1. The ArrowSchema structure
      2. The ArrowArray structure
    3. Example use cases
      1. Using the C Data API to export Arrow-formatted data
      2. Importing Arrow data with Python
      3. Exporting Arrow data with the C Data API from Python to Go
    4. Streaming across the C Data API
      1. Streaming record batches from Python to Go
    5. Other use cases
      1. Some exercises
    6. Summary
  15. Chapter 6: Leveraging the Arrow Compute APIs
    1. Technical requirements
    2. Letting Arrow do the work for you
      1. Input shaping
      2. Value casting
      3. Types of functions
    3. Executing compute functions
      1. Using the C++ compute library
      2. Using the compute library in Python
    4. Picking the right tools
      1. Adding a constant value to an array
    5. Summary
  16. Chapter 7: Using the Arrow Datasets API
    1. Technical requirements
    2. Querying multifile datasets
      1. Creating a sample dataset
      2. Discovering dataset fragments
    3. Filtering data programmatically
      1. Expressing yourself – a quick detour
      2. Using expressions for filtering data
      3. Deriving and renaming columns (projecting)
    4. Using the Datasets API in Python
      1. Creating our sample dataset
      2. Discovering the dataset
      3. Using different file formats
      4. Filtering and projecting columns with Python
    5. Streaming results
      1. Working with partitioned datasets
    6. Summary
  17. Chapter 8: Exploring Apache Arrow Flight RPC
    1. Technical requirements
    2. The basics and complications of gRPC
      1. Building modern APIs for data
      2. Efficiency and streaming are important
    3. Arrow Flight's building blocks
      1. Horizontal scalability with Arrow Flight
      2. Adding your business logic to Flight
      3. Other bells and whistles
      4. Understanding the Flight Protocol Buffer definitions
    4. Using Flight, choose your language!
      1. Building a Python Flight server
      2. Building a Go Flight server
    5. What is Flight SQL?
      1. Setting up a performance test
      2. Running the performance test
      3. Flight SQL, the new kid on the block
    6. Summary
  18. Section 3: Real-World Examples, Use Cases, and Future Development
  19. Chapter 9: Powered by Apache Arrow
    1. Swimming in data with Dremio Sonar
      1. Clarifying Dremio Sonar's architecture
      2. The library of the Gods…of data analysis
    2. Spicing up your ML workflows
      1. Bringing the AI engine to where the data lives
    3. Arrow in the browser using JavaScript
      1. Gaining a little perspective
      2. Taking flight with Falcon
    4. Summary
  20. Chapter 10: How to Leave Your Mark on Arrow
    1. Technical requirements
    2. Contributing to open source projects
      1. Communication is key
      2. You don't necessarily have to contribute code
      3. There are a lot of reasons why you should contribute!
    3. Preparing your first pull request
      1. Navigating JIRA
      2. Setting up Git
      3. Orienting yourself in the code base
      4. Building the Arrow libraries
      5. Creating the PR
      6. Understanding the CI configuration
      7. Development using Archery
    4. Find your interest and expand on it
    5. Getting that sweet, sweet approval
    6. Finishing up with style!
      1. C++ styling
      2. Python code styling
      3. Go code styling
    7. Summary
  21. Chapter 11: Future Development and Plans
    1. Examining Flight SQL (redux)
      1. Why Flight SQL?
      2. Defining the Flight SQL protocol
    2. Firing a Ballista using Data(Fusion)
      1. What about Spark?
      2. Looking at Ballista's development roadmap
    3. Building a cross-language compute serialization
      1. Why Substrait?
      2. Working with Substrait serialization
      3. Getting involved with Substrait development
    4. Final words
    5. Why subscribe?
  22. Other Books You May Enjoy
    1. Packt is searching for authors like you
    2. Share Your Thoughts

Product information

  • Title: In-Memory Analytics with Apache Arrow
  • Author(s): Matthew Topol
  • Release date: June 2022
  • Publisher(s): Packt Publishing
  • ISBN: 9781801071031