Chapter 7. Attack Patterns for Real-World Systems

In this chapter we explore the attack patterns that an adversary could use to generate adversarial input, taking into account the attacker’s goals and capabilities. These patterns build on the methods described in Chapter 6, and as we will see, the choice of approach depends on factors such as how much access the adversary has to the target for testing and developing adversarial input, and how much they know about the target model and its processing chain. We’ll also consider whether an adversarial perturbation or an adversarial patch can be reused across different image or audio files.
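To make the idea of reuse concrete, the following minimal sketch applies one fixed perturbation to a whole batch of images and measures how often it flips the model’s prediction. This is an illustrative assumption, not code from this book: it presumes a Keras-style classifier exposing a predict method, pixel values scaled to [0, 1], and the array shapes shown in the comments.

    import numpy as np

    def apply_shared_perturbation(images, perturbation):
        """Add the same fixed perturbation to every image in a batch.

        images:       (n, height, width, channels) array, pixels in [0, 1]
        perturbation: (height, width, channels) array on the same scale
        """
        # NumPy broadcasts the single perturbation across the batch;
        # clipping keeps the result within the valid pixel range.
        return np.clip(images + perturbation, 0.0, 1.0)

    def reuse_rate(model, images, labels, perturbation):
        """Fraction of the batch that the model classifies correctly on
        the clean images but misclassifies once the shared perturbation
        is added. `model.predict` is a hypothetical classifier returning
        per-class scores of shape (n, num_classes)."""
        clean = np.argmax(model.predict(images), axis=1)
        adv = np.argmax(
            model.predict(apply_shared_perturbation(images, perturbation)),
            axis=1,
        )
        return np.mean((clean == labels) & (adv != labels))

A perturbation that scores highly under a measure like this approximates a reusable (universal) perturbation: the attacker can compute it once and apply it to inputs they have never seen.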

Attack Patterns

Chapter 6 considered different techniques for generating adversarial examples. These methods have been proven in a “laboratory” environment, but how do they play out in real-world scenarios where the adversary has limited knowledge of, or access to, the target model and the broader system? Creating adversarial input that remains effective outside the laboratory poses a significant challenge to any attacker.

There are several different patterns that might be exploited to generate adversarial input and subsequently launch an attack. These patterns vary in complexity and in the resources needed to generate adversarial examples. In addition, some approaches require greater knowledge of, or access to, the target system than others. The pattern selected may also depend upon the required robustness and covertness of the attack.

Broadly speaking, we can ...
