Book description
Security isn't often treated as a high priority in machine learning systems, yet rapid advances in ML introduce a new set of security risks quite different from those of traditional software. This report reviews known security risks for ML systems and examines why security in this area is particularly important today.
Catherine Nelson, principal data scientist at SAP Concur, describes techniques to enhance security, increase privacy, and mitigate attacks on ML systems. After defining what "secure" means in this context, she examines whether the techniques available today are sufficient to achieve true security in ML systems. This report is ideal for ML engineers, data scientists, and managers of ML teams.
- Learn key points in the machine learning lifecycle when security becomes particularly important
- Get an overview of known security risks, including attacks via transfer learning, model theft, model inversion, and membership inference
- Mitigate security risks using audits and governance, model monitoring, data checks and balances, and general security practices
Product information
- Title: Is Building Secure ML Possible?
- Author(s): Catherine Nelson
- Release date: October 2021
- Publisher(s): O'Reilly Media, Inc.
- ISBN: 9781098107321