Chapter 13. Concurrent Performance Techniques

In the history of computing to date, software developers have typically written sequential code. Programming languages and hardware were generally designed to process one instruction at a time. In many situations a so-called “free lunch” was enjoyed, where application performance would improve simply by purchasing the latest hardware. The increase in transistors available on a chip led to better and more capable processors.

Many readers will have experienced the situation where moving the software to a bigger or newer box was the solution to capacity problems, rather than paying the cost of investigating the underlying issues or considering a different programming paradigm.

Moore’s law originally predicted that the number of transistors on a chip would approximately double each year; the estimate was later refined to every 18 months. Moore’s law held fast for around 50 years, but it has started to falter, and the momentum we have enjoyed for so long is increasingly difficult to maintain.

The impact of the technology running out of steam can be seen in Figure 13-1, a central pillar of “The Free Lunch Is Over,” a 2005 article written by Herb Sutter that aptly describes the arrival of the modern era of performance analysis.1

We now live in a world where multicore processors are the norm. Well-written modern applications must take advantage of distributing application processing over multiple cores.
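As a minimal illustration of this idea (not an example from the book), the sketch below uses a Java parallel stream to fan a computation out across the cores available to the JVM. The class name and workload are hypothetical; a parallel stream partitions the input range over the common ForkJoinPool, which by default sizes itself to the number of available processors.

```java
import java.util.stream.LongStream;

// A minimal sketch of distributing work across cores with a parallel
// stream. The range 1..n is split into chunks that are processed on
// the common ForkJoinPool, one worker thread per available core.
public class ParallelSum {
    static long sumOfSquares(long n) {
        return LongStream.rangeClosed(1, n)
                         .parallel()       // fan the work out across cores
                         .map(i -> i * i)
                         .sum();
    }

    public static void main(String[] args) {
        // Deterministic result regardless of how the work is partitioned:
        // sum of squares 1..1000 = 1000 * 1001 * 2001 / 6
        System.out.println(sumOfSquares(1_000)); // 333833500
    }
}
```

Note that the result is deterministic even though the order of execution is not, because summation is associative; not every sequential algorithm parallelizes this cleanly.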
