Chapter 3. Processes: Taming the Wild West of Machine Learning Workflows
“AI is in this critical moment where humankind is trying to decide whether this technology is good for us or not.”
Been Kim
Despite its long-term promise, ML is likely overhyped today, just as other forms of AI have been in the past (see, for example, the first and second AI winters). Hype, cavalier attitudes, and lax regulatory oversight in the US have led to sloppy ML system implementations that frequently cause discrimination and privacy harms.

Yet, at its core, ML is software. To help avoid failures in the future, all the documentation, testing, managing, and monitoring that organizations do with their existing software assets should be done with their ML projects, too. And that's just the beginning. Organizations also have to consider the risks specific to ML: discrimination, privacy harms, security vulnerabilities, drift toward failure, and unstable results.

After introducing these primary drivers of AI incidents and proposing some lower-level process solutions, this chapter touches on the emergent issues of legal liability and compliance. We then offer higher-level risk mitigation proposals related to model governance, AI incident response plans, organizational ML principles, and corporate social responsibility (CSR). While this chapter focuses on ways organizations can update their processes to better address special risk considerations for ML, remember that ML needs basic software ...