Chapter 9. Web Caching
Caching is one of the most useful features built on top of HTTP’s uniform interface. You can take advantage of caching to reduce end-user-perceived latency, increase reliability, reduce bandwidth usage and cost, and reduce server load. Caches can live anywhere: in the server’s network, in content delivery networks (CDNs), or in the client’s network (where they are usually called forward proxies).
It is common to use the word cache to refer either to an object cache such as memcached (http://memcached.org/) or to an HTTP cache such as Squid (http://www.squid-cache.org/) or Traffic Server (http://incubator.apache.org/projects/trafficserver.html). Both kinds of caches improve performance and have key roles to play in the overall web service deployment architecture, but there is an important difference between them. HTTP caches such as Squid do not require clients and servers to call any special programming API to manage data in the cache. This is not the case with object caches. For instance, in order to use memcached, you must use memcached’s programming API to store, retrieve, and delete objects. HTTP caches, by contrast, rely on the same uniform interface that clients and servers already use. Therefore, as long as you are using HTTP as defined, you should be able to add a caching layer without making code changes.
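To make the distinction concrete, here is a minimal sketch contrasting the two styles. The class and function names are illustrative inventions, not memcached's or any HTTP cache's real API: the object cache forces callers through explicit store/retrieve/delete calls, while the HTTP-style server simply declares cacheability in a response header that any standards-compliant intermediary can honor.

```python
# Illustrative sketch only: names are made up for this example, not a real API.

class ObjectCache:
    """Like memcached: callers must manage entries through an explicit API."""

    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def delete(self, key):
        self._store.pop(key, None)


def build_response(body):
    """With HTTP caching, the server only declares cacheability in headers.

    Any intermediary that speaks HTTP (Squid, a CDN, a forward proxy) can
    cache this response without the client or server calling a cache API.
    """
    return {
        "status": 200,
        "headers": {"Cache-Control": "max-age=3600"},
        "body": body,
    }
```

The point of the sketch: swapping in or removing an HTTP cache changes nothing in `build_response`, whereas removing memcached would break every caller that uses the `ObjectCache` API directly.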
Tip
Since a cache can be both an HTTP client and a server, in caching-related discussions, the term origin server is used to differentiate between caching servers ...