Chapter 5. Distributed Systems and Communications

Part I of this book presented the fundamental concepts of full-stack deep learning, describing the theoretical and technical foundations of developing deep learning models efficiently. The first four chapters explored the interplay among hardware, software, data, and algorithms that brings deep learning applications to life. Part II extends the knowledge you have acquired so far to distributed systems and explores how a fleet of computational devices can be used to scale out model development.

In this chapter, you will learn about the types of distributed systems and their corresponding challenges. You will also learn about the various communication topologies and techniques that exist today to enable deep learning in a distributed setting. To lower the infrastructure barrier to entry, this chapter briefly discusses some of the software and frameworks for managing your processes and infrastructure at scale.
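
As a concrete taste of the communication techniques explored in this chapter, the following is a minimal sketch of the all-reduce collective, assuming PyTorch's torch.distributed package with the CPU-friendly gloo backend; the loopback address, port, and the two local processes standing in for two devices are illustrative choices, not requirements:

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int):
    # Each process joins a shared process group. The address and port
    # are placeholders for a single-machine demonstration.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Every rank contributes its own tensor; all_reduce sums them in place,
    # so afterward every rank holds the same aggregated result.
    t = torch.ones(3) * (rank + 1)
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {t.tolist()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2  # two local processes stand in for two devices
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

Run as a script, rank 0 contributes [1, 1, 1] and rank 1 contributes [2, 2, 2], so both ranks print [3.0, 3.0, 3.0]: every participant ends up holding the same aggregated tensor, which is the property distributed training relies on to synchronize gradients.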

This chapter also highlights some of the noteworthy massive-scale deep learning infrastructure that exists today. Managing infrastructure is quite an involved process and is best owned by experts such as DevOps and MLOps engineers; you may choose to skip “Scaling Compute Capacity” if this content isn’t relevant to your particular role. Finally, this chapter introduces the different types of distributed deep learning techniques that are available, to provide context and an overview of the existing patterns for scaling out model development.
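
As a preview of those patterns, here is a minimal sketch of the most common one, data parallelism, assuming PyTorch's DistributedDataParallel wrapper; the tiny linear model, random data, and hyperparameters are placeholders, not a recipe:

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP


def train(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29501"
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Each rank holds a full replica of the model; DDP all-reduces
    # gradients across ranks during backward().
    model = DDP(torch.nn.Linear(10, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # Each rank trains on its own shard of the data (random here).
    x, y = torch.randn(8, 10), torch.randn(8, 1)
    loss = F.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()  # gradient synchronization happens here
    opt.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

The notable design choice is that gradient synchronization is folded into backward(), so the per-rank training loop reads almost exactly like single-device code.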
