Chapter 4. Massively Scalable Content Caching
Introduction
Caching accelerates content serving by storing responses to be served again in the future. Because it serves the full cached response rather than running the same computations and queries again for an identical request, content caching reduces load on upstream servers. Caching increases performance and reduces load, meaning you can serve faster with fewer resources. Scaling and distributing caching servers in strategic locations can have a dramatic effect on user experience: hosting content close to the consumer yields the best performance. This is the pattern of content delivery networks, or CDNs. With NGINX you're able to cache your content wherever you can place an NGINX server, effectively enabling you to create your own CDN. With NGINX caching, you're also able to passively cache and serve cached responses in the event of an upstream failure.
Caching Zones
Problem
You need to cache content and define where the cache is stored.
Solution
Use the proxy_cache_path directive to define shared memory cache zones and a location for the content:
proxy_cache_path /var/nginx/cache
                 keys_zone=CACHE:60m
                 levels=1:2
                 inactive=3h
                 max_size=20g;
proxy_cache CACHE;
The cache definition example creates a directory for cached responses on the filesystem at /var/nginx/cache and creates a shared memory space named CACHE with 60 megabytes of memory. This example sets the directory structure levels, defines the release of cached responses after they have not been requested in 3 hours, and defines a maximum size of the cache of 20 gigabytes. The proxy_cache directive informs a particular context to use the cache zone. The proxy_cache_path directive is valid in the HTTP context, and the proxy_cache directive is valid in the HTTP, server, and location contexts.
Discussion
To configure caching in NGINX, it's necessary to declare a path and zone to be used. A cache zone in NGINX is created with the directive proxy_cache_path. The proxy_cache_path directive designates a location to store the cached information and a shared memory space to store active keys and response metadata. Optional parameters to this directive provide more control over how the cache is maintained and accessed. The levels parameter defines how the file structure is created. The value is a colon-separated value that declares the length of subdirectory names, with a maximum of three levels. NGINX caches based on the cache key, which is a hashed value. NGINX then stores the result in the file structure provided, using the cache key as a file path and breaking up directories based on the levels value. The inactive parameter allows for control over the length of time a cache item will be hosted after its last use. The size of the cache is also configurable with the max_size parameter. Other parameters relate to the cache-loading process, which loads the cache keys into the shared memory zone from the files cached on disk.
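If the cache-loading process needs tuning, proxy_cache_path accepts loader parameters. The following is a minimal sketch with illustrative values: loader_threshold caps the duration of each loader iteration, and loader_files caps how many keys are loaded per iteration:
# Sketch: tune the cache loader (values are illustrative)
proxy_cache_path /var/nginx/cache
                 keys_zone=CACHE:60m
                 levels=1:2
                 inactive=3h
                 max_size=20g
                 loader_threshold=300ms
                 loader_files=200;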
Caching Hash Keys
Problem
You need to control how your content is cached and looked up.
Solution
Use the proxy_cache_key directive along with variables to define what constitutes a cache hit or miss:
proxy_cache_key "$host$request_uri $cookie_user";
This cache hash key will instruct NGINX to cache pages based on the host and URI being requested, as well as a cookie that defines the user. With this you can cache dynamic pages without serving content that was generated for a different user.
Discussion
The default proxy_cache_key, which will fit most use cases, is "$scheme$proxy_host$request_uri". The variables used include the scheme (HTTP or HTTPS), the proxy_host where the request is being sent, and the request URI. Together, these reflect the URL that NGINX is proxying the request to. You may find that many other factors define a unique request per application, such as request arguments, headers, session identifiers, and so on, for which you'll want to create your own hash key.1
Selecting a good hash key is very important and should be thought through with an understanding of the application. Selecting a cache key for static content is typically straightforward; using the hostname and URI will suffice. Selecting a cache key for fairly dynamic content, like pages for a dashboard application, requires more knowledge of how users interact with the application and the degree of variance between user experiences. For security reasons, you may not want to present cached data from one user to another without fully understanding the context. The proxy_cache_key directive configures the string to be hashed for the cache key. The proxy_cache_key directive can be set in the context of HTTP, server, and location blocks, providing flexible control over how requests are cached.
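As an illustration, a dashboard that renders per-session content might fold a session cookie into the key. The following sketch assumes a cookie named session_id and an upstream named origin, both hypothetical:
location /dashboard/ {
    proxy_cache CACHE;
    # vary cached entries per user session (cookie name is hypothetical)
    proxy_cache_key "$scheme$proxy_host$request_uri $cookie_session_id";
    proxy_pass http://origin;
}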
Cache Bypass
Problem
You need the ability to bypass caching.
Solution
Use the proxy_cache_bypass directive with a nonempty, nonzero value. One way to do this is by setting a variable within location blocks that you do not want cached to equal 1:
proxy_cache_bypass $http_cache_bypass;
The configuration tells NGINX to bypass the cache if the HTTP request header named cache_bypass is set to any value that is not 0.
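To trigger the bypass from a client, you can send the corresponding header with curl. Note that the variable $http_cache_bypass maps to a request header named Cache-Bypass, because NGINX converts dashes in header names to underscores in variable names; the URL here is illustrative:
$ curl -H "Cache-Bypass: 1" localhost/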
Discussion
There are a number of scenarios that demand that the request not be cached. For this, NGINX exposes the proxy_cache_bypass directive: when its value is nonempty and nonzero, the request will be sent to an upstream server rather than be pulled from the cache. Different needs and scenarios for bypassing the cache will be dictated by your application's use case. Techniques for bypassing the cache can be as simple as using a request or response header, or as intricate as multiple map blocks working together.
You may want to bypass the cache for many reasons. One important reason is troubleshooting and debugging. Reproducing issues can be hard if you're consistently pulling cached pages or if your cache key is specific to a user identifier. Having the ability to bypass the cache is vital. Options include, but are not limited to, bypassing the cache when a particular cookie, header, or request argument is set. You can also turn off the cache completely for a given context, such as a location block, by setting proxy_cache off;.
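As an illustration of the map technique, the following sketch combines a cookie and a request argument, both hypothetically named nocache, into a single bypass flag:
# bypass the cache when either value is nonempty
map "$cookie_nocache$arg_nocache" $cache_bypass {
    default 0;
    ~.      1;
}
server {
    ...
    location / {
        proxy_cache CACHE;
        proxy_cache_bypass $cache_bypass;
        proxy_pass http://origin;
    }
}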
Cache Performance
Problem
You need to increase performance by caching on the client side.
Solution
Use client-side cache control headers:
location ~* \.(css|js)$ {
    expires 1y;
    add_header Cache-Control "public";
}
This location block specifies that the client can cache the content of CSS and JavaScript files. The expires directive instructs the client that their cached resource will no longer be valid after one year. The add_header directive adds the HTTP response header Cache-Control to the response, with a value of public, which allows any caching server along the way to cache the resource. If we specify private, only the client is allowed to cache the value.
Discussion
Cache performance has many factors, disk speed being high on the list. There are many things within the NGINX configuration you can do to assist with cache performance. One option is to set response headers in such a way that the client caches the response and does not make the request to NGINX at all, instead serving it from its own cache.
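Along these lines, the expires directive accepts a variable (since NGINX 1.7.9), so a map block can assign a different client-side lifetime per content type. A sketch with illustrative values:
# vary the client-side cache lifetime by response Content-Type
map $sent_http_content_type $expires {
    default    off;
    text/css   1y;
    ~image/    max;
}
server {
    expires $expires;
    ...
}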
Purging
Problem
You need to invalidate an object from the cache.
Solution
Use the purge feature of NGINX Plus, the proxy_cache_purge directive, and a nonempty, nonzero-value variable:
map $request_method $purge_method {
    PURGE 1;
    default 0;
}
server {
    ...
    location / {
        ...
        proxy_cache_purge $purge_method;
    }
}
In this example, the cache for a particular object will be purged if it's requested with a method of PURGE. The following is a curl example of purging the cache of a file named main.js:
$ curl -XPURGE localhost/main.js
Discussion
A common way to handle static files is to put a hash of the file in the filename. This ensures that as you roll out new code and content, your CDN recognizes it as a new file because the URI has changed. However, this does not exactly work for dynamic content to which you've assigned cache keys that don't fit this model. In every caching scenario, you must have a way to purge the cache. NGINX Plus provides a simple method of purging cached responses. The proxy_cache_purge directive, when passed a nonempty, nonzero value, will purge the cached items matching the request. A simple way to set up purging is by mapping the request method for PURGE. However, you may want to use this in conjunction with the GeoIP module or simple authentication to ensure that not just anyone can purge your precious cache items. NGINX Plus also allows the use of *, which will purge cache items that match a common URI prefix. To use wildcards, you will need to configure your proxy_cache_path directive with the purger=on argument.
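One way to guard the purge, sketched below with illustrative address ranges, is a geo block that permits the PURGE method only from trusted networks:
# allow PURGE only from internal networks (ranges are illustrative)
geo $purge_allowed {
    default        0;
    10.0.0.0/8     1;
    192.168.0.0/16 1;
}
map $request_method $purge_method {
    PURGE   $purge_allowed;
    default 0;
}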
Cache Slicing
Problem
You need to increase caching efficiency by segmenting the file into fragments.
Solution
Use the NGINX slice directive and its embedded variables to divide the cache result into fragments:
proxy_cache_path /tmp/mycache keys_zone=mycache:10m;
server {
    ...
    proxy_cache mycache;
    slice 1m;
    proxy_cache_key $host$uri$is_args$args$slice_range;
    proxy_set_header Range $slice_range;
    proxy_http_version 1.1;
    proxy_cache_valid 200 206 1h;
    location / {
        proxy_pass http://origin:80;
    }
}
Discussion
This configuration defines a cache zone and enables it for the server. The slice directive is then used to instruct NGINX to slice the response into 1 MB file segments. The cache files are stored according to the proxy_cache_key directive. Note the use of the embedded variable named slice_range. That same variable is used as a header when making the request to the origin, and the request's HTTP version is upgraded to HTTP/1.1, because HTTP/1.0 does not support byte-range requests. The cache validity is set for response codes 200 and 206 for one hour, and then the location and origin are defined.
The Cache Slice module was developed for delivery of HTML5 video, which uses byte-range requests to pseudostream content to the browser. By default, NGINX is able to serve byte-range requests from its cache. If a request for a byte range is made for uncached content, NGINX requests the entire file from the origin. When you use the Cache Slice module, NGINX requests only the necessary segments from the origin. Range requests that are larger than the slice size, including the entire file, trigger subrequests for each of the required segments, and those segments are then cached. When all of the segments are cached, the response is assembled and sent to the client, enabling NGINX to more efficiently cache and serve content requested in ranges.
The Cache Slice module should be used only on large files that do not change. NGINX validates the ETag each time it receives a segment from the origin. If the ETag on the origin changes, NGINX aborts the transaction because the cache is no longer valid. If the content does change and the file is smaller, or your origin can handle load spikes during the cache-fill process, it's better to use the Cache Lock module described in the blog listed in the following Also See section.
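To observe slicing in action, you can request a byte range with curl; NGINX fetches and caches only the slices needed to satisfy the range. The filename here is hypothetical:
$ curl -o /dev/null -H "Range: bytes=0-1048575" localhost/video.mp4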
Also See
Smart and Efficient Byte-Range Caching with NGINX & NGINX Plus
1 Any combination of text or variables exposed to NGINX can be used to form a cache key. A list of variables is available in the NGINX documentation: http://nginx.org/en/docs/varindex.html.