Chapter 65. Use Model-Agnostic Explanations for Finding Bias in Black-Box Models

Yiannis Kanellopoulos and Andreas Messalas

The need to shed light on the opacity of “black-box” models is evident: Articles 15 and 22 of the EU’s General Data Protection Regulation (2018), the OECD Principles on Artificial Intelligence (2019), and the US Senate’s proposed Algorithmic Accountability Act all indicate that machine learning interpretability, along with accountability and fairness, has become (or should become) an integral characteristic of any application that makes automated decisions.

Since many organizations will be obliged to explain the decisions of their automated models, there will be substantial demand for third-party organizations to assess interpretability, as an external assessment adds integrity and objectivity to the audit process. Moreover, some organizations (especially start-ups) won’t have the resources to address interpretability in-house, making third-party auditors necessary.

Third-party auditing, however, raises intellectual property concerns, since organizations will not want to disclose the details of their models. Therefore, among the wide ...
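
This is where the model-agnostic framing pays off for auditors: such methods need nothing more than query access to the model’s predictions, never its weights or training code. The following is a minimal sketch of a query-only bias probe, assuming (purely for illustration) a scikit-learn classifier standing in for the proprietary model and that feature 0 encodes a binary protected attribute; the demographic-parity check and counterfactual flip shown are generic model-agnostic tests, not a prescribed method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in for a proprietary model: in a real audit the
# auditor would see only the predict interface, never the internals.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X[:, 0] = rng.integers(0, 2, size=len(X))  # feature 0: binary protected attribute (assumed)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

def predict(inputs: np.ndarray) -> np.ndarray:
    """Everything the auditor may call: inputs in, decisions out."""
    return black_box.predict(inputs)

# Probe 1: demographic parity -- compare positive-decision rates by group.
decisions = predict(X)
rate_0 = decisions[X[:, 0] == 0].mean()
rate_1 = decisions[X[:, 0] == 1].mean()
print(f"Positive rate: group 0 = {rate_0:.1%}, group 1 = {rate_1:.1%}")

# Probe 2: counterfactual flip -- change only the protected attribute
# and count how often the model's decision changes with it.
X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]
flip_rate = (predict(X_flipped) != decisions).mean()
print(f"Decisions that depend on the protected attribute: {flip_rate:.1%}")
```

Because both probes touch only the public prediction interface, the model owner’s intellectual property stays sealed while the auditor still gets a quantitative read on disparate treatment.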
