How to Implement Nginx Caching

Before jumping into the Nginx configuration, we must make sure that the deployed Rails application has a copy of the precompiled assets and is configured to serve static files. The first task can be done by including the line below in the application's Dockerfile.

bundle exec rake assets:precompile
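For context, a minimal Dockerfile fragment might look like the following sketch; the base image, working directory, and RAILS_ENV value are assumptions for illustration:

```dockerfile
# Assumed base image and layout; adjust to your application.
FROM ruby:3.2
WORKDIR /app
COPY . .
RUN bundle install
# Precompile the assets so they can be served as static files.
RUN RAILS_ENV=production bundle exec rake assets:precompile
```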

Configuring the Rails application to serve static assets is also an easy task; we only need to make sure that the environment variable below is set on the running container.

RAILS_SERVE_STATIC_FILES: 'true'
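The YAML line above fits, for example, under the environment key of a Docker Compose service definition; the service and image names here are placeholders:

```yaml
services:
  app:
    image: my-rails-app:latest   # placeholder image name
    environment:
      RAILS_SERVE_STATIC_FILES: 'true'
```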

The “ngx_http_proxy_module” is a built-in Nginx module that allows passing requests from Nginx to another server; in our case, that server is the Rails application. This module provides a rich set of directives for configuring caching for Nginx upstream servers. The snippet below shows a complete Nginx configuration file used to cache the static content of the Rails application.
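A minimal sketch of such a configuration, assembled from the directives described in the rest of this section; the upstream name, listen port, asset path pattern, and bypass variable are illustrative assumptions:

```nginx
# Cache storage: file location, directory levels, shared-memory zone for
# keys/metadata, maximum size on disk, and lifetime for inactive entries.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=blog:10m
                 max_size=1g inactive=60m;

# The key that is hashed (MD5) to produce the cache file name.
proxy_cache_key "$scheme$request_method$host$request_uri";

upstream rails_app {
    server app:3000;               # assumed Rails container name and port
}

server {
    listen 80;

    # Static assets: caching enabled.
    location /assets/ {
        proxy_pass http://rails_app;
        proxy_cache blog;
        proxy_cache_valid 200 60m;
        proxy_cache_bypass $http_x_cache_bypass;   # assumed bypass condition
        proxy_no_cache $http_x_cache_bypass;
        add_header X-Cache-Stats $upstream_cache_status;
    }

    # Dynamic content: caching disabled.
    location / {
        proxy_pass http://rails_app;
    }
}
```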

The first thing to notice in the above configuration is the directive proxy_cache_path. It sets the following cache parameters:

- The location of the cache files. Cached data is stored in files, and the MD5 hash of the cache key is used to name them.
- The levels parameter defines the hierarchy levels of the cache directory, from 1 to 3; each level accepts the value 1 or 2.
- keys_zone=blog:10m defines the name of the cache zone and the size of the shared memory area that holds the keys and metadata.
- max_size=1g specifies the maximum storage size that Nginx can use for the cache.
- inactive=60m specifies how long a cached object may remain unaccessed before it is removed from the cache.

proxy_cache_key is the second directive used. Its value must be unique for each cached request, since it is hashed to generate the cache file name.
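Because the cache file name is the MD5 hash of the key, the on-disk path of a cache entry can be derived by hand. A small shell sketch, assuming a key built from $scheme$request_method$host$request_uri and a levels=1:2 layout (the key value itself is hypothetical):

```shell
# Hypothetical key for one request (scheme + method + host + URI).
key='httpGETblog.example.com/assets/application.css'

# Nginx names the cache file after the MD5 hash of proxy_cache_key.
hash=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)

# With levels=1:2, the first-level directory is the last hex character of
# the hash and the second-level directory is the two characters before it.
level1=${hash: -1}
level2=${hash: -3:2}
echo "/var/cache/nginx/${level1}/${level2}/${hash}"
```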

Next, the Nginx configuration defines two locations: the first serves the static assets with caching enabled, and the other serves the dynamic content with caching disabled.

The last step is to enable the cache for the requests forwarded to the Rails application. Below is a brief description of the most important directives needed to achieve this task:

- proxy_cache: defines the cache memory zone to use.
- proxy_cache_valid: defines the caching time for the web responses.
- proxy_cache_bypass: defines conditions under which the response will not be taken from the cache.
- proxy_no_cache: defines conditions under which the response will not be saved to the cache.
- add_header X-Cache-Stats: adds a header to the response indicating whether the request was a cache hit or a cache miss.

Testing and verifying the caching configurations

The first thing I would like to test is whether the cache configuration works as expected. To test this, I requested the same static file a couple of times and inspected the X-Cache-Stats response header to check whether each request was a HIT or a MISS. The image below shows the results of my requests: only the first request was a MISS; all subsequent requests were HITs.
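This check can be reproduced with curl by requesting the same asset twice and filtering the headers; the hostname and asset path here are placeholders:

```shell
# Fetch only the response headers for a static asset (placeholder URL)
# and show the cache-status header added by the Nginx configuration.
curl -s -o /dev/null -D - http://localhost/assets/application.css \
  | grep -i x-cache-stats
# Per the test described above, the first request reports MISS and
# repeated requests report HIT.
```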

The next thing I would like to test is the performance of the web application with and without the caching feature. To achieve this task, I deployed two Nginx servers, one with caching enabled and the other without it, then used the “boom” tool to fire a number of concurrent requests at each server. Below are the results of both cases:
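For reference, the boom invocation for each server looks roughly like the following; the request count, concurrency level, and ports are assumptions, not the values used in the original test:

```shell
# -n total requests, -c concurrent clients (illustrative values).
boom -n 500 -c 10 http://localhost:8080/   # Nginx without caching
boom -n 500 -c 10 http://localhost:8081/   # Nginx with caching
```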

Nginx without caching

Nginx with caching

Based on the above test, we can say that the caching feature reduced the time needed to complete all the requests by five seconds. In addition, the requests per second (RPS) increased from 83 to 98.