How a Simple Caching Strategy Can Break a System

Rules for an optimized caching mechanism

For every complex problem, there is an answer that is clear, simple, and wrong.

Let’s look at what a typical cache miss workflow looks like:

A cache miss workflow

This works reasonably well as long as the number of requests per key is low. As concurrent requests for the same key increase, so does the cache miss rate, which pushes more load onto the database. In the worst case, the database gets overwhelmed and requests start timing out. The workflow then looks like this:


Let’s create a small demo to demonstrate this.

We will simulate a cache request that returns in 5 ms.

We will simulate a database request that returns in 1 second.

Below are two cache functions for set and get.
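A minimal sketch of what these could look like, assuming an in-memory Map stands in for the cache and a cacheCalls counter (a name introduced here for illustration) tracks how many cache calls are made:

```typescript
// In-memory stand-in for a real cache such as Redis.
const cache = new Map<string, string>();
let cacheCalls = 0;

// Simulated cache write: resolves after ~5 ms.
function cacheSet(key: string, value: string): Promise<void> {
  cacheCalls++;
  return new Promise((resolve) =>
    setTimeout(() => {
      cache.set(key, value);
      resolve();
    }, 5)
  );
}

// Simulated cache read: resolves after ~5 ms, undefined on a miss.
function cacheGet(key: string): Promise<string | undefined> {
  cacheCalls++;
  return new Promise((resolve) =>
    setTimeout(() => resolve(cache.get(key)), 5)
  );
}
```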

Below is the method to get data from the database.
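A sketch of that method, assuming the database read is simulated with a 1-second delay and a dbCalls counter (again an illustrative name):

```typescript
let dbCalls = 0;

// Simulated database read: resolves after 1 second.
function getDataFromDb(key: string): Promise<string> {
  dbCalls++;
  return new Promise((resolve) =>
    setTimeout(() => resolve(`value-for-${key}`), 1000)
  );
}
```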

The code below simulates 10 concurrent requests that all miss the cache.
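A sketch of that simulation, assuming a cache-aside getData helper built on the functions above, with Promise.all firing the requests concurrently against the same cold key:

```typescript
// Cache-aside read: check the cache, fall back to the database on a miss,
// then populate the cache for future readers.
async function getData(key: string): Promise<string> {
  const cached = await cacheGet(key);
  if (cached !== undefined) {
    return cached;
  }
  const value = await getDataFromDb(key);
  await cacheSet(key, value);
  return value;
}

async function main() {
  // Fire 10 concurrent requests for the same (cold) key.
  const requests = Array.from({ length: 10 }, () => getData("user:42"));
  await Promise.all(requests);
  console.log(
    `total calls made to cache ${cacheCalls}. total calls to db ${dbCalls}`
  );
}

main();
```

Because the database call takes a full second, none of the 10 requests has populated the cache by the time the others check it, so every request misses, hits the database, and writes the cache.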

The output of the above code will be:

total calls made to cache 20. total calls to db 10

This, of course, is not what we were hoping for. In fact, it turned out worse than having no cache in front of the database at all: all 10 requests still hit the database, and we made 20 extra cache calls on top of that!

This is known as a cache stampede.