For a long time, the programming community has known that programming with threads and locks is hard. It often requires an inordinate degree of expertise even for simple problems and leads to programs that have faults that are hard to diagnose. Still, threads and locks are general enough to express everything we might need to write, from parallel image processors to concurrent web servers, and there is an undeniable benefit in having a single general API. However, if we want to make programming concurrent and parallel software easier, we need to embrace the idea that different problems require different tools; a single tool just doesn’t cut it. Image processing is naturally expressed in terms of parallel array operations, whereas threads are a good fit in the case of a concurrent web server.
So in Haskell, we aim to provide the right tool for the job, for as many jobs as possible. If a job is found for which Haskell doesn’t have the right tool, then we try to find a way to build it. The inevitable downside of this diversity is that there is a lot to learn, and that is what this book is all about. In this book, I’ll discuss how to write parallel and concurrent programs in Haskell, ranging from the simple uses of parallelism to speed up computation-heavy programs to the use of lightweight threads for writing high-speed concurrent network servers. Along the way, we’ll see how to use Haskell to write programs that run on the powerful processor in a modern graphics card (GPU), and to write programs that can run on multiple machines in a network (distributed programming).
That is not to say that I plan to cover every experimental programming model that has sprung up; if you peruse the packages on Hackage, you’ll encounter a wide variety of libraries for parallel and concurrent programming, many of which were built to scratch a particular itch, not to mention all the research projects that aren’t ready for real-world use yet. In this book I’m going to focus on the APIs that can be used right now to get work done and are stable enough to rely upon in production. Furthermore, my aim is to leave you with a firm grasp of how the lowest layers work, so that you can build your own abstractions on top of them if you should need to.
In many fields, the words parallel and concurrent are synonyms; not so in programming, where they are used to describe fundamentally different concepts.
A parallel program is one that uses a multiplicity of computational hardware (e.g., several processor cores) to perform a computation more quickly. The aim is to arrive at the answer earlier, by delegating different parts of the computation to different processors that execute at the same time.
By contrast, concurrency is a program-structuring technique in which there are multiple threads of control. Conceptually, the threads of control execute “at the same time”; that is, the user sees their effects interleaved. Whether they actually execute at the same time or not is an implementation detail; a concurrent program can execute on a single processor through interleaved execution or on multiple physical processors.
While parallel programming is concerned only with efficiency, concurrent programming is concerned with structuring a program that needs to interact with multiple independent external agents (for example, the user, a database server, and some external clients). Concurrency allows such programs to be modular; the thread that interacts with the user is distinct from the thread that talks to the database. In the absence of concurrency, such programs have to be written with event loops and callbacks, which are typically more cumbersome and lack the modularity that threads offer.
The notion of “threads of control” does not make sense in a purely functional program, because there are no effects to observe, and the evaluation order is irrelevant. So concurrency is a structuring technique for effectful code; in Haskell, that means code in the IO monad.
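As a minimal sketch of what this means in practice (this is an illustration, not one of the book's examples), the forkIO function from Control.Concurrent creates a new thread of control running an IO action, and the effects of the threads appear interleaved:

import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever)
import System.IO (BufferMode (NoBuffering), hSetBuffering, stdout)

-- Two threads of control, each performing effects in the IO monad.
-- Whether they really run simultaneously is an implementation detail;
-- the user just sees their output interleaved.
main :: IO ()
main = do
  hSetBuffering stdout NoBuffering
  _ <- forkIO (forever (putChar 'A'))   -- one thread of control
  _ <- forkIO (forever (putChar 'B'))   -- another, independent one
  threadDelay 1000000                   -- let them run for about a second

The threadDelay call merely keeps the main thread alive long enough to see the output of the two threads interleaved on the terminal.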
A related distinction is between deterministic and nondeterministic programming models. A deterministic programming model is one in which each program can give only one result, whereas a nondeterministic programming model admits programs that may have different results, depending on some aspect of the execution. Concurrent programming models are necessarily nondeterministic because they must interact with external agents that cause events at unpredictable times. Nondeterminism has some notable drawbacks, however: Programs become significantly harder to test and reason about.
For parallel programming, we would like to use deterministic programming models if at all possible. Since the goal is just to arrive at the answer more quickly, we would rather not make our program harder to debug in the process. Deterministic parallel programming is the best of both worlds: Testing, debugging, and reasoning can be performed on the sequential program, but the program runs faster with the addition of more processors. Indeed, most computer processors themselves implement deterministic parallelism in the form of pipelining and multiple execution units.
While it is possible to do parallel programming using concurrency, that is often a poor choice because concurrency sacrifices determinism. In Haskell, most parallel programming models are deterministic. However, it is important to note that deterministic programming models are not sufficient to express all kinds of parallel algorithms; there are algorithms that depend on internal nondeterminism, particularly problems that involve searching a solution space. Moreover, we sometimes want to parallelize programs that really do have side effects, and then there is no alternative but to use nondeterministic parallel or concurrent programming.
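As a small taste of what deterministic parallelism looks like in Haskell, here is a sketch for illustration only (pairSum and expensive are invented names, and it assumes the parallel package, which provides Control.Parallel.Strategies, is installed):

import Control.Parallel.Strategies (rpar, rseq, runEval)

-- Evaluate two independent pure computations, possibly in parallel.
-- rpar sparks a computation that may run on another core; rseq
-- evaluates its argument here and waits for it.
pairSum :: Int -> Int -> (Int, Int)
pairSum x y = runEval $ do
  a <- rpar (expensive x)   -- start this in parallel
  b <- rseq (expensive y)   -- evaluate this one now, and wait
  _ <- rseq a               -- wait for the sparked computation too
  return (a, b)
  where
    expensive n = sum [1 .. n]  -- stand-in for real work

main :: IO ()
main = print (pairSum 10000000 20000000)

Compiled with ghc -threaded and run with +RTS -N, the two computations may be evaluated on separate cores, but the value of pairSum is the same either way.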
Finally, it is entirely reasonable to want to mix parallelism and concurrency in the same program. Most interactive programs need to use concurrency to maintain a responsive user interface while compute-intensive tasks are being performed in the background.
To try out the sample programs and exercises from this book, you will need to install the Haskell Platform. The Haskell Platform includes the GHC compiler and all the important libraries, including the parallel and concurrent libraries we shall be using. The code in this book was tested with the Haskell Platform version 2012.4.0.0, but the sample code will be updated as new versions of the platform are released.
Some chapters require the installation of additional packages. Instructions for installing the extra dependencies can be found in Sample Code.
Additionally, I recommend installing ThreadScope. ThreadScope is a tool for visualizing the execution of Haskell programs and is particularly useful for gaining insight into the behavior of Parallel and Concurrent Haskell code. On a Linux system, ThreadScope is probably available direct from your distribution, and this is by far the easiest way to get it. For example, on Ubuntu, you can install it through a simple:
$ sudo apt-get install threadscope
For instructions on how to install ThreadScope on other systems, see the Haskell website.
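To give a rough idea of how ThreadScope is typically used (prog.hs is a placeholder name here), you compile with event logging enabled, run the program so that it writes an event log, and then open the resulting file in ThreadScope:

$ ghc -O2 -threaded -eventlog -rtsopts prog.hs
$ ./prog +RTS -N2 -l
$ threadscope prog.eventlog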
While reading this book, I recommend that you have the following documentation in hand:
- The GHC User’s Guide.
- The Haskell Platform library documentation, which can be found on the main Haskell Platform site. Any types or functions that are used in this book that are not explicitly described can be found documented there.
- Documentation for packages not in the Haskell Platform, which can be found on Hackage. To search for documentation for a particular function or type, use Hoogle.
It should be noted that the majority of the APIs used in this book are not part of the Haskell 2010 standard. They are provided by add-on packages, some of which are part of the Haskell Platform, while the rest are available on Hackage.
The sample code is collected together in the parconc-examples package on Hackage. To download and unpack it, run:
$ cabal unpack parconc-examples
Then, install the dependent packages:
$ cd parconc-examples
$ cabal install --only-dependencies
Next, build all the sample programs:
$ cabal build
The parconc-examples package will be updated as necessary to follow future changes in the Haskell Platform or other APIs.