Chapter 3. Building Your First Distributed Application
Now that you’ve seen the basics of the Ray API in action, let’s build something more realistic with it. By the end of this chapter, you will have set up a reinforcement learning (RL) problem from scratch, implemented your first algorithm to tackle it, and used Ray tasks and actors to parallelize this solution on a local cluster, all in fewer than 250 lines of code.
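As a quick refresher before we start, here is a minimal sketch of the two Ray Core primitives we’ll rely on throughout the chapter. The `rollout` function and `Counter` class are hypothetical placeholders for illustration, not the code we’ll build later:

```python
import ray

ray.init()  # start a local Ray cluster

# A Ray task: a stateless function executed remotely.
@ray.remote
def rollout(seed: int) -> int:
    # Placeholder for one simulation run; returns a dummy result.
    return seed * 2

# A Ray actor: a stateful worker process.
@ray.remote
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self) -> int:
        self.value += 1
        return self.value

# Launch four tasks in parallel and block until all results arrive.
results = ray.get([rollout.remote(i) for i in range(4)])

# Create an actor and call one of its methods remotely.
counter = Counter.remote()
print(results, ray.get(counter.increment.remote()))
```

Tasks give you parallel, stateless computation; actors give you long-lived state on a worker process. Our RL solution will combine both.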
This chapter is designed for readers without any prior RL experience. We’ll work on a straightforward problem and develop the skills needed to tackle it hands-on. Since Chapter 4 is devoted entirely to this topic, we’ll skip all advanced RL concepts and terminology and focus on the problem at hand. But even if you’re a fairly advanced RL practitioner, you’ll likely benefit from implementing a classic algorithm in a distributed setting.
This is the last chapter that works only with Ray Core. We hope you learn to appreciate how powerful and flexible it is, and how quickly you can implement distributed experiments that would otherwise take considerable effort to scale.
Before we jump into any implementation, let’s quickly talk about the paradigm of RL in a bit more detail. Feel free to skip this section if you’ve worked with RL before.
Introducing Reinforcement Learning
One of my (Max’s) favorite mobile apps can automatically classify or “label” individual plants in our garden. You use it by simply showing it a picture of the plant in question. That’s immensely ...