GPU-accelerated Numba on Python

Numba supports GPU computing on both NVIDIA CUDA and AMD ROCm GPUs, though ROCm support is still experimental.

Let's look at both in brief before we move on to the installation process in the next section:

  • Numba CUDA: GPU computing on NVIDIA CUDA GPUs with Numba works by compiling a restricted subset of Python code directly into CUDA kernels and device functions. The Compute Unified Device Architecture (CUDA) execution model is the foundation of Numba's CUDA support (see the sketch after this list).
  • Numba ROCm: GPU computing on AMD ROC GPUs with Numba works in much the same way, by compiling a restricted subset of Python code directly into HSA kernels and device functions, following the Heterogeneous System Architecture (HSA) execution model.

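To make the CUDA bullet above concrete, here is a minimal sketch of a Numba CUDA kernel together with a device function. It assumes a working CUDA toolkit and a supported NVIDIA GPU; the array sizes and the add_arrays and scale names are illustrative choices for this example, not part of Numba's API.

    import numpy as np
    from numba import cuda

    @cuda.jit(device=True)
    def scale(x):
        # A device function: callable only from GPU code, not from the host.
        return 2.0 * x

    @cuda.jit
    def add_arrays(a, b, out):
        # A kernel: each GPU thread handles one element of the output array.
        i = cuda.grid(1)
        if i < out.size:
            out[i] = scale(a[i]) + b[i]

    n = 1_000_000
    a = np.arange(n, dtype=np.float32)
    b = np.ones(n, dtype=np.float32)
    out = np.zeros_like(a)

    # Launch configuration: enough blocks of 256 threads to cover all n elements.
    threads_per_block = 256
    blocks_per_grid = (n + threads_per_block - 1) // threads_per_block
    add_arrays[blocks_per_grid, threads_per_block](a, b, out)

Numba compiles the kernel the first time it is launched and copies the NumPy arrays to and from the device automatically; explicit transfers with cuda.to_device can be used to avoid repeated copies when the same data is reused across launches.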