Chapter 5. Driving Value with Responsible Machine Learning Innovation
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
Eliezer Yudkowsky
“Why do 87% of data science projects never make it into production?” asks a recent VentureBeat article. For many companies, getting ML models into production is where the rubber meets the road for ML risks. To some, the entire purpose of building a model is to deploy it for live predictions, and anything short of that is a failure. For others, the goal of an ML model may simply be ad hoc predictions, valuations, categorizations, or alerts. This short chapter provides an overview of the key concepts companies should understand as they look to adopt ML and drive value from it. Generally, the implications are far more significant for companies making material corporate decisions based on predictive algorithms than for those merely experimenting with or prototyping exploratory ML projects.
Trust and Risk
Smart organizations adopting AI tend to ask two major questions: “How can I trust this model?” and “How risky is it?” These are critical questions for firms to ask before they put ML models into production. The key point is that there is a flywheel effect between the answers to these questions: the more you understand an ML system’s risks, the more you can trust it. We often find that executives and leaders ...