Chapter 7. Responsible AI Development and Use

In the previous chapters, we’ve looked at how to use the key Microsoft cloud AI services. But it’s also important to think about the bigger picture of how you build and use AI, so that you can take advantage of these cloud AI services without creating problems for your users or your organization.

AI and machine learning are powerful techniques that can make software more useful, systems more efficient, and people more productive. But they can also violate privacy, create security problems, replicate and amplify bias, automate decision making that has negative consequences for individuals or entire groups—or just be plain wrong on occasion.

Tip

This is a big and complicated topic, and you don’t have to master every nuance to use cloud AI services. Don’t get overwhelmed by all the issues: you don’t need to do everything—but equally, don’t assume that you don’t need to do anything about responsible AI.

The greater the potential of AI (diagnosing cancer, detecting earthquakes, predicting failures in critical infrastructure, or guiding the visually impaired through an unfamiliar location), the greater the responsibility to get it right. As AI expands into areas like healthcare, education, and criminal justice, the social implications and consequences are significant. But even everyday uses of AI could inadvertently exclude or harm users if the systems aren’t fair, accountable, transparent, and secure.

Large language models like the GPT-3 model behind the Azure ...
