Chapter 3. The Computational Side of Deep Learning
This chapter explores how computations are performed on hardware and how acceleration is achieved through hardware advancements. As discussed in Chapter 1, a clear understanding of what is happening across your application's entire stack (the algorithm, software, hardware, and data) is invaluable. Limitations and trade-offs can surface anywhere in that stack, and such an understanding empowers you to make careful, well-informed decisions and strike the right balance within your constraints, especially when scaling.
In Chapter 2, you learned the foundational concepts of deep learning and worked through software implementations of a couple of basic problems. In this chapter, you will dive into the details of how that software interacts with hardware. We'll cover the fundamentals of computation units and specialized hardware for accelerated computing, looking at their inner workings and at how to get the best out of these silicon powerhouses. In addition to delving into the foundational concepts of computer architecture and the accelerated computing landscape, we'll examine the implications of scaling for hardware devices.
The field of artificial intelligence, widely recognized as having been established at a 1956 workshop held at Dartmouth College, requires specialized, scaled-up computing infrastructure. Up until the turn of the century, extensive research and development ...