Appendix B. Installing and Deploying Ray
The power of Ray is in its support for various deployment models, ranging from a single-node deployment—allowing you to experiment with Ray locally—to clusters containing thousands of machines. The same code developed on the local Ray installation can run on the entire spectrum of Ray’s installations. In this appendix, we will show some of the installation options that we evaluated while writing this book.
Installing Ray Locally
The simplest Ray installation is done locally with pip. Use the following command:
pip install -U ray
This command installs all the code required to run local Ray programs or launch programs on a Ray cluster (see “Using Ray Clusters”). The command installs the latest official release. In addition, you can install Ray from daily releases or from a specific commit. It is also possible to install Ray inside a Conda environment. Finally, you can build Ray from source by following the instructions in the Ray documentation.
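As a sketch of the options above, the commands below show the basic release install, an install with the commonly used "default" extra (which adds the dashboard and cluster launcher), and an install inside a fresh Conda environment; the environment name and Python version here are illustrative choices, not requirements:

```shell
# Latest official release of Ray core
pip install -U ray

# Release plus the dashboard and cluster launcher
pip install -U "ray[default]"

# Inside a Conda environment (Conda supplies Python; Ray itself
# still comes from pip, since Ray's Conda packages may lag releases)
conda create -n ray-env python=3.10 -y
conda activate ray-env
pip install -U ray
```

For daily (nightly) wheels or a specific commit, consult the installation page of the Ray documentation, which lists the wheel URLs to pass to pip.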
Using Ray Docker Images
As an alternative to installing Ray natively on your local machine, you can run one of the project's prebuilt Docker images. The Ray project provides a wealth of Docker images built for various Python versions and hardware options. You can execute Ray code by starting a container from the corresponding image:
docker run --rm --shm-size=<shm-size> -t -i <image name>
Here <shm-size> is the amount of shared memory that Ray uses internally for its object store. A good estimate ...