Effective caching with WordPress is not always as straightforward as it should be. WordPress has had the same architecture for many years, and revamping it would be a large undertaking: it would mean losing compatibility with hundreds of thousands of plugins and themes. Because of this, WordPress runs a bit slower and needs some finer attention to get it running at high performance. WordPress likes a lot of memory and CPU, things that not all shared hosting or VPS servers are going to give up cheaply. If you have a $5 or $10 VPS, you might eke out a few hundred connections a second or less without doing anything.

This is not a tutorial so much as it is an outline of what happens when performance is ignored.

Many articles are going to focus on server configuration, and while that is important, what I will cover are my findings from a large WordPress multisite that was on very expensive hosting and still suffered from poor performance.

Companies like GoDaddy, Dreamhost, and Digital Ocean have advantages and disadvantages.

Shared hosting has had a rough history of poor performance, and while this is still somewhat true, it's not as bad as it used to be. WordPress is so popular that almost all hosting companies have tools in place to minimize the load on their systems.

High load is expensive for hosts, so it's in their best interest to optimize these products to an extent.

GoDaddy, for example, has a very good caching system that uses Redis. This may not be on all plans, but it's definitely on the affordable monthly plans. The GoDaddy network is also not prone to issues involving high traffic spikes. But they still have the same overcrowding issues, and while you can request to have your site moved to a new server, it's on you to make that request.

A VPS like one from Digital Ocean, on the other hand, has guaranteed system resources. It's going to perform better overall if you have configured your system correctly, but now memory usage is going to be your concern, and it's what will drive up the cost.

While many VPS servers are affordable, starting around $5-$10 for 1GB of memory, you still have to run the entire OS and all of your environment's dependencies: MySQL, PHP, Apache/Nginx, Memcached, and so on.

Implement your own caching strategy

So it’s up to you to implement your own caching strategy wisely.

Personally, I use a combination of Memcached and CloudFlare to keep my site fast. But my server might do a couple of dozen requests per minute.

I'm not really looking at caching as an effective option for budget hosting, but rather at what happens when you take a large site and don't consider the outcomes.

I am also going to explain the scenario where caching became a necessary part of achieving acceptable performance.

I recently spent a number of months working on a large, process-, service-, and memory-intensive WordPress site. The very nature of this site was demanding in that most of its content came from Elasticsearch queries run by a custom WordPress plugin. It supported multilingual content and had a less-than-conventional DNS setup, with a WordPress plugin handling most of the DNS, multilingual, and multisite routing.

The latency to Elasticsearch was nearing 120ms, as shown by New Relic.

These ES queries were processed and then presented in the WordPress API for consumption.

With over a hundred network sites and many more on the way, WordPress cron jobs were becoming an issue as well.

The default transient storage was nearing thousands of entries.

An error logging plugin was used and generated well over 1 million rows in less than 2 months.

The site used Timber templating without caching.

Multiple Shortcake UI widgets each made their own API calls.

The multilingual plugin had an update that caused it to phone home on every single request, causing further delays of about 180ms.

There were many moving parts, and the hosting used was, in my opinion, ill-prepared for this site and its requirements. Whether or not the technical requirements were ever addressed, I don't know.

The initial server-side load times were nearly 1,000ms spent in PHP and another 1,100 to 1,200ms spent on external requests to Elasticsearch and other calls.

The first step was to upgrade from PHP 5.6 to PHP 7.0. This alone reduced overall load times by nearly 600ms, giving us an average of about 1,200ms. Not good, but better.

I had researched a few different caching options at first.

WordPress out of the box, without any configuration, will use transients: a simple way of storing serialized data in the database. From what I saw at scale, this created massive database tables across a large network site. It also didn't address any of the issues we experienced, because they were all caused by custom plugins and themes.
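To make the pattern concrete, here is a minimal sketch of the transient API. The key name and the `expensive_query()` callback are hypothetical; `get_transient()` and `set_transient()` are the standard WordPress functions. Note that without a persistent object cache, each transient becomes rows in the options table, which is exactly how these tables balloon on a large network.

```php
<?php
// Hypothetical example: cache the result of an expensive query
// using the WordPress Transients API.
function get_report_data() {
    $data = get_transient( 'my_report_data' ); // hypothetical cache key

    if ( false === $data ) {
        // Cache miss: do the expensive work, then store it
        // in the database for 15 minutes.
        $data = expensive_query(); // hypothetical callback
        set_transient( 'my_report_data', $data, 15 * MINUTE_IN_SECONDS );
    }

    return $data;
}
```

With a persistent object cache plugin installed, the same calls are served from memory instead of the database, which is why the backend choice matters so much.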

I spent some time tracking down performance problems and eventually came up with a simple and effective plan to implement caching in these custom plugins and themes. Once in place, I tried Memcached, then Redis, and eventually settled on LCache. Simply caching the Elasticsearch calls reduced server times by nearly 40%! Remember, one call took about 180ms, and multiple calls could be made per page.
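The approach can be sketched as wrapping the remote call in the WordPress object cache, so that with LCache (or Memcached/Redis) as the backend, a ~180ms network round trip becomes an in-memory lookup. The `search_elastic()` function and the key scheme here are hypothetical stand-ins for the custom plugin's query code; `wp_cache_get()` and `wp_cache_set()` are the standard object cache functions.

```php
<?php
// Hypothetical sketch: cache Elasticsearch responses keyed by
// a hash of the query parameters.
function cached_es_query( $params ) {
    $key    = 'es_' . md5( wp_json_encode( $params ) );
    $result = wp_cache_get( $key, 'es_queries' );

    if ( false === $result ) {
        $result = search_elastic( $params ); // the expensive remote call
        wp_cache_set( $key, $result, 'es_queries', 5 * MINUTE_IN_SECONDS );
    }

    return $result;
}
```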

LCache also helped Timber's performance drastically, with an overall reduction of 600 to 1,000ms for the client.
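Timber makes this easy because its render call supports output caching out of the box: passing an expiry (in seconds) as the third argument caches the compiled template output through the object cache. The template name and context below are illustrative, based on the Timber 1.x API.

```php
<?php
// Hypothetical example: cache Timber's rendered output for 10 minutes.
$context         = Timber::get_context();
$context['post'] = new Timber\Post();

// Third argument is the cache expiry in seconds.
Timber::render( 'single.twig', $context, 600 );
```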

In closing

You can only throw so much money at a problem before you are forced to fix it. Multi-thousand-dollar hosting with poor performance is obviously not a good solution. Taking the time to profile and monitor a website's bottlenecks is very important, especially when your site is calling external APIs.