Book description
Are human decisions less biased than automated ones? AI is increasingly showing up in highly sensitive areas such as healthcare, hiring, and criminal justice. Many people assume that using data to automate decisions would make everything fair, but that’s not the case. In this report, business, analytics, and data science leaders will examine the challenges of defining fairness and reducing unfair bias throughout the machine learning pipeline.
Trisha Mahoney, Kush R. Varshney, and Michael Hind from IBM explain why you need to engage early and authoritatively when building AI you can trust. You’ll learn how your organization should approach fairness and bias, including trade-offs you need to make between model accuracy and model bias. This report also introduces you to AI Fairness 360, an extensible open source toolkit for measuring, understanding, and reducing AI bias.
In this report, you’ll explore:
- Legal, ethical, and trust factors you need to consider when defining fairness for your use case
- Different ways to measure and remove unfair bias, using the metrics most relevant to your use case (see the brief sketch after this list)
- How to define acceptable thresholds for model accuracy and unfair model bias
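To make the measure-then-mitigate workflow concrete, here is a minimal sketch using the AI Fairness 360 (AIF360) toolkit the report introduces. The toy DataFrame, the column names, and the privileged/unprivileged group definitions are illustrative assumptions, not examples taken from the report.

```python
# Minimal AIF360 sketch: measure bias in a dataset, then mitigate it by reweighing.
# The data and group definitions below are hypothetical, for illustration only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical data: 'sex' is the protected attribute, 'label' the favorable outcome.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 0, 0, 0],
    "age":   [25, 47, 33, 52, 38, 29],
    "label": [1, 1, 0, 0, 0, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Measure unfair bias in the original data.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference:", metric.statistical_parity_difference())
print("Disparate impact:", metric.disparate_impact())

# Mitigate by reweighing training examples before model training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)

metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Mean difference after reweighing:", metric_after.mean_difference())
```

Reweighing is just one pre-processing option; AIF360 also includes in-processing and post-processing algorithms, and choosing among them is part of the accuracy-versus-bias trade-off the report discusses.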
Product information
- Title: AI Fairness
- Author(s): Trisha Mahoney, Kush R. Varshney, and Michael Hind
- Release date: April 2020
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781492077657