Numba supports GPU computing on both NVIDIA CUDA and AMD ROCm GPUs, though support for the latter is still experimental.
Let's look at each briefly before moving on to the installation process in the next section:
- Numba CUDA: Numba implements NVIDIA CUDA GPU computing by compiling a restricted subset of Python code directly into CUDA kernels and device functions. The Compute Unified Device Architecture (CUDA) execution model is the foundation of Numba CUDA.
- Numba ROCm: Numba implements AMD ROCm GPU computing by compiling a restricted subset of Python code directly into HSA kernels and device functions.