Appendix B. CUDA
Throughout the book, we’ve mostly been using PyTorch or tools built on top of it, such as fastai and Hugging Face transformers. When we first introduced PyTorch, we pitched it as a low-level framework, one where you build architectures and write training loops “from scratch” using your knowledge of linear algebra.
But PyTorch may not be the lowest level of abstraction you deal with in machine learning.
PyTorch itself is written in C++, and the CUDA language is an extension of C++. CUDA is self-described as a “programming model” that allows you to write code for Nvidia GPUs: within your C++ code, you include special functions called “CUDA kernels” that perform a portion of the work on the GPU (we’ll sketch a minimal example after the sidebar below).
Who’s That Pokémon? CUDA Kernels
A kernel is a function that is compiled for and designed to run on specialized accelerator hardware such as GPUs (graphics processing units), FPGAs (field-programmable gate arrays), and ASICs (application-specific integrated circuits). Kernels are generally written by engineers who are very familiar with the hardware architecture, and they are extensively tuned to perform a single task very well, such as matrix multiplication or convolution. CUDA kernels are kernels that run on devices supporting CUDA: Nvidia’s GPUs and accelerators.
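To make the idea concrete, here is a minimal sketch of a CUDA kernel and its launch. The __global__ qualifier and the triple-angle-bracket launch syntax are the extensions CUDA adds to C++; the add kernel itself is just an illustrative vector addition (compile with Nvidia’s nvcc compiler):

#include <cstdio>
#include <cuda_runtime.h>

// The __global__ qualifier marks a kernel: a function compiled for
// the GPU but launched from ordinary host (CPU) code.
__global__ void add(const float *a, const float *b, float *c, int n) {
    // Each GPU thread computes exactly one element of the output.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Unified ("managed") memory is visible to both CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) {
        a[i] = 1.0f;
        b[i] = 2.0f;
    }

    // The triple-angle-bracket syntax is CUDA's extension to C++:
    // it launches the kernel across `blocks` blocks of `threads` threads each.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);  // prints 3.000000
    cudaFree(a);
    cudaFree(b);
    cudaFree(c);
    return 0;
}

Every thread runs the same kernel body; the built-in blockIdx, blockDim, and threadIdx variables tell each thread which element it owns, which is how a single function gets spread across thousands of GPU cores at once.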
PyTorch and many other deep learning frameworks use a handful of CUDA kernels to implement their backend, and then build a higher-level interface in a language like Python. This allows you to run super-fast, hand-tuned CUDA code without ever having to write it yourself.
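The sketch below illustrates that layering; it is not PyTorch’s actual source, and the scale_kernel and scale names are hypothetical. The pattern is that a hand-tuned kernel hides behind an ordinary C++ function, which a binding layer such as pybind11 (the one PyTorch uses) can then expose as a Python call:

#include <cuda_runtime.h>

// A hand-tuned kernel: hypothetical, for illustration only.
__global__ void scale_kernel(float *x, float alpha, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= alpha;
}

// The framework exposes only an ordinary C++ entry point like this
// one; a binding layer maps it onto a Python function, so users of
// the Python API never see the launch syntax at all.
void scale(float *device_x, float alpha, int n) {
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale_kernel<<<blocks, threads>>>(device_x, alpha, n);
    cudaDeviceSynchronize();
}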