For the past few years, I’ve been building a website to host free programming tutorials. This site started in 2014 as a passion project, and represents thousands of hours of blood, sweat, and tears — it’s my pride and joy.

Now, I realize it’s not perfect — but it’s free, and provides an opportunity for me to hone my own skills while encouraging others to learn programming. My passion project has turned into a successful and growing side project which needed to scale without costing me a fortune.

In the beginning — there was a server

The site originated in 2014 as nothing more than a handful of game development tutorials which reflected my interests. By January of 2016, the site had reached a grand total of 150–200 users per day.

Over the years this traffic continued to grow, and the demand on my solitary server increased with it. The additional traffic required an upgrade to my Linode server instance, which allowed the Laravel-based website to keep running without collapsing in the short term.

After a very long time building out my site in Laravel and playing with things to determine what works, I ended up migrating to Hugo — a static site builder that doesn’t require PHP or a fancy framework.

At the time, moving to Hugo improved the speed of the site to a fair extent. It also reduced server costs since I was no longer relying on a MySQL database to host the content. I also ended up drinking the AWS kool-aid and migrating my site onto a T2 micro instance.

The architecture diagram for the site looked something like this:

For a while, things were quiet.

My build pipeline was sorted. I was writing new content and trying to improve the site. And the traffic was growing. It was a peaceful time.

The challenge — performance

After a while, I noticed the website beginning to slow down. Performance was degrading noticeably, and I had an inkling as to why.

When using the popular Website Speed Test tool Pingdom, at peak usage times, I was seeing massive Wait times of around 0.8s on average before all of the other requests would start resolving. The final load time was roughly 1.5–2.5 seconds.

Notice in this example how there’s a large wait time on the second request after the DNS resolution. These metrics were taken during a period of fairly minimal traffic, but it provides some insight on the performance degradation.
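If you want to reproduce this kind of measurement yourself without Pingdom, curl's timing variables give a rough equivalent. This is just a sketch, and the domain below is a placeholder:

```shell
# Rough equivalent of Pingdom's waterfall timings using curl.
# time_starttransfer is time to first byte, i.e. the "Wait" phase.
# Replace example.com with the site you want to measure.
curl -s -o /dev/null \
  -w 'dns:   %{time_namelookup}s\nwait:  %{time_starttransfer}s\ntotal: %{time_total}s\n' \
  https://example.com/
```

Running this a few times at peak hours gives a quick feel for how much of the total load time is being spent waiting on the server.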

The plan — considering my options

Going forward, I figured there were two choices:

1. Spin up a load balancer and an auto-scaling group, and deploy my application to however many T2 instances were needed.
2. Migrate away from T2 and into S3 with CloudFront.

Option 1

The first option was very appealing to me. I had recently been learning Terraform and creating resilient Go APIs, and had already configured my site to use Let's Encrypt for HTTPS-only traffic.

However, this option would not be cheap. This approach meant increased costs and idle CPU capacity with more management required. My AWS invoice was already creeping up every month as I worked with more services and built more tutorials — so I was apprehensive about adding more instances to my bill.

Option 2

The second option was something I thought would be far riskier. I’d have to find a way to migrate my live site from a T2 instance to an S3 bucket using CloudFront and Route53 on the front-end to manage traffic.

I’m a massive advocate of everything being encrypted, so it was a hard requirement that the site would continue to be served via HTTPS. Thankfully, this could easily be achieved by using Amazon’s ACM service and requesting a certificate using email verification.
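For reference, the same certificate request can be made from the AWS CLI. This is a sketch rather than my exact command; the domain is a placeholder, and note that CloudFront will only use ACM certificates issued in the us-east-1 region:

```shell
# Request a public certificate validated by email.
# CloudFront requires the certificate to live in us-east-1.
aws acm request-certificate \
  --domain-name example.com \
  --subject-alternative-names www.example.com \
  --validation-method EMAIL \
  --region us-east-1
```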

The decision — migrate to a static site on S3

The first port of call when it came to migrating my site was the CI/CD pipeline. This was incredibly simple with TravisCI and looks like this:

```yaml
language: python

install:
  - wget https://github.com/gohugoio/hugo/releases/download/v0.34/hugo_0.34_Linux-64bit.deb
  - sudo dpkg -i hugo*.deb

script:
  - hugo --buildDrafts
  - cp -r scripts public/

deploy:
  provider: s3
  # ... all my S3 creds
```

Every time someone commits to the master branch of my repo, this kicks off a build and uploads the result to my production S3 bucket.

Once the CI/CD pipeline was in place, I pushed through a simple change and ensured the bucket’s configuration was correct.
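Part of that bucket configuration is enabling static website hosting. A sketch of the equivalent AWS CLI call, with a placeholder bucket name:

```shell
# Enable static website hosting on the production bucket.
aws s3 website s3://my-site-bucket/ \
  --index-document index.html \
  --error-document 404.html
```

The bucket contents also need to be readable by visitors, either via a public-read bucket policy or by fronting the bucket with a CloudFront origin access identity.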

Et voilà! — the static files I would be serving were all there and ready to go.

After I was confident in the rest of my CloudFront configuration and the Route 53 setup, I migrated my site’s nameservers to point to Amazon’s. After 24 hours I was good to go: the migration was complete, and I could decommission the original T2.micro instance.
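On the Route 53 side, this boils down to an alias A record pointing the domain at the CloudFront distribution. A sketch via the CLI, with placeholder zone ID, domain, and distribution hostname (Z2FDTNDATAQYW2 is CloudFront's fixed, well-known alias hosted zone ID):

```shell
# Point the apex domain at the CloudFront distribution
# via an alias A record.
aws route53 change-resource-record-sets \
  --hosted-zone-id ZEXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "dxxxxxxxxxxxx.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```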

When configuring the CloudFront distribution, I chose for my site to be cached and served from all possible edge locations.

This means that whenever someone from Australia comes and requests my site, a cache lookup will be done first at the edge location — and if it’s a cache hit, the site’s load time will be incredibly low. Otherwise, it’ll be served from the origin S3 bucket and cached for any subsequent requests.
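You can see which of those two paths a given request took by checking the X-Cache response header that CloudFront adds; the domain below is a placeholder:

```shell
# "X-Cache: Hit from cloudfront" means the edge cache answered;
# "X-Cache: Miss from cloudfront" means it went back to the S3 origin.
curl -sI https://example.com/ | grep -i '^x-cache'
```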

The map of all of AWS’ Edge locations

Whilst this option costs more, the performance increases are definitely worth it and the total cost will still fall well short of my previous monthly spend.

Special thanks to Alan Reid, who helped me with the S3, CloudFront, and Route 53 configuration.

The results — faster, cheaper, better!

After the migration was complete, the results were astounding. I had dropped my load time from around 1.5–2.5s down to a freaking 234ms, as highlighted in the below screenshot. This is an insane performance improvement.

Have a look at these request times: the largest proportion of time is now spent on DNS resolution, and even then it’s just over 0.1s.

We are no longer in Wait hell!

According to Pingdom, my site is now faster than 99% of all other sites tested. And as you can see, there are still a few more things that I can improve upon!

But the most astounding thing about this migration is that not only has it drastically improved the performance of my site, it has also improved its resiliency, as I’m no longer reliant on a single T2 instance to keep things going.

Regarding the economics, the costs of hosting the site have been reduced from roughly $20/month down to around $7/month:

- Route 53 is about $0.50/month
- CloudFront is about $6.50/month
- A small amount is spent on S3 bucket storage

This is an incredible saving and puts my mind at ease. If the stars ever align and my site sees a massive surge of traffic, it will hold up without costing me a fortune!

Conclusion

Having sampled numerous different ways of deploying and hosting a static site, this is without a doubt one of the best approaches.

It’s that perfect blend of performance and cost. Its simplicity makes it an accessible choice for anyone looking to host their own massively resilient, hugely scalable site.

Hopefully, you found this thrilling tale of turbulence and triumph interesting! If you did and wish to support me then please feel free to check out my YouTube channel or add me on LinkedIn!