Chapter 5. Architecting for Data Reliability
Airbnb, the global online vacation rental marketplace, wrote in a 2020 post on its engineering blog that “leadership [set] high expectations for data timeliness and quality,” leading the company to make significant investments in data quality and governance. Meanwhile, Krishna Puttaswamy and Suresh Srinivas, former engineers at Uber, wrote in a 2021 Uber Engineering blog article that high-quality big data is “at the heart of this massive transformation platform.”
It’s no secret: data quality is top of mind for some of the best data teams. Still, it’s one thing to write about data quality and another to achieve it. How do we actually get there in practice?
Data reliability—an organization’s ability to deliver high data availability and health throughout the entire data life cycle—is the outcome of high data quality. As companies ingest more operational and third-party data than ever before, with employees from across the organization interacting with that data at all stages of its life cycle, it’s become increasingly important for that data to be reliable.
Data reliability has to be intentionally built into every level of your organization, from the processes and technologies you leverage to build and manage your data stack to the way you communicate and triage data issues further downstream. In this chapter, we’ll explore how to architect for data reliability at each stage of the pipeline, and at each stage of the data engineering experience.
Measuring and Maintaining High Data ...