Lessons learned building TheOtherMail entirely on AWS infrastructure.

TheOtherMail In a Nutshell

We wanted to build a service that would allow users to create temporary email addresses (@theothermail.com) that would forward incoming mail to the user's personal email account.

At its core, the service consisted of four components:

Simple web app for users

Database to store email and forwarding addresses

SMTP service

Forwarding agent

Sounds pretty simple for a side project.

Iteration 1

When we first started off, we had grand dreams that the service would one day have millions of active users and thousands of emails being forwarded concurrently.

As such, we decided that our web app would live on Elastic Beanstalk and our database would be hosted on RDS. The advantages of this setup were:

Code deployments would be effortless

No management of web servers

Free SSL with Certificate Manager

Scaling the web tier would be easy with Auto Scaling Groups

We wouldn't have to worry about managing the database or connection pools

On the email side of things, we decided to use Simple Email Service (SES) as our mail server, S3 as our intermediate data store, and a Lambda function to perform the forwarding. The advantages of this setup were:

No mail server to manage, and easy SPF & DKIM setup

No need to manage a dedicated microservice when a simple forwarding function would suffice

CloudWatch logging for Lambda would make debugging and monitoring incredibly easy
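Concretely, the SES → S3 → Lambda forwarding path can be sketched roughly as below. This is an illustration under assumptions, not our production code: the bucket name, the verified sender address, and the in-memory FORWARD_MAP are hypothetical stand-ins (the real service looked forwarding addresses up in the database).

```python
# Rough sketch of an SES -> S3 -> Lambda forwarding function.
# Bucket name, SOURCE, and FORWARD_MAP are placeholder assumptions.
import email

FORWARD_MAP = {"temp123@theothermail.com": "user@example.com"}
SOURCE = "forwarder@theothermail.com"  # must be an SES-verified identity

def rewrite_message(raw_bytes):
    """Rewrite headers so SES accepts the re-send while keeping the body."""
    msg = email.message_from_bytes(raw_bytes)
    original_from = msg.get("From", "")
    # SES will only send from verified identities, and the original DKIM
    # signature would no longer validate after forwarding, so strip both.
    for header in ("From", "Return-Path", "Sender", "DKIM-Signature"):
        del msg[header]
    msg["From"] = SOURCE
    msg["Reply-To"] = original_from
    return msg.as_bytes()

def handler(event, context):
    # boto3 ships with the Lambda runtime; imported lazily here so the
    # header-rewrite logic above stays testable without it.
    import boto3
    s3, ses = boto3.client("s3"), boto3.client("ses")
    mail = event["Records"][0]["ses"]["mail"]
    # The SES receipt rule stores the raw message in S3 keyed by messageId.
    raw = s3.get_object(Bucket="theothermail-inbound",
                        Key=mail["messageId"])["Body"].read()
    for rcpt in mail["destination"]:
        dest = FORWARD_MAP.get(rcpt.lower())
        if dest:
            ses.send_raw_email(Source=SOURCE, Destinations=[dest],
                               RawMessage={"Data": rewrite_message(raw)})
```

The From/Reply-To shuffle is the key trick: SES refuses to send from an unverified address, so the forwarder sends as itself and preserves the original sender in Reply-To.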

The Problem

Our service worked perfectly with this setup. Technically, there was nothing wrong with the services and technologies we chose. But we knew we had to make some changes...

The monthly bill for our web tier was ~$50-60/mo (primarily due to ELB and RDS). It might not sound like a lot, but when you don't have scale (< 1,000 users) or a revenue model, it's a lot to pay for (especially for a side project).

Meanwhile, our email backend was only setting us back a few pennies. The only downsides of the email setup were the restrictions Amazon imposes on SES, specifically sending (egress) and API rate limits. These turned out to be a non-issue, though, because we never came close to exceeding our allocated quota.

Iteration 2

We decided to move off Elastic Beanstalk + RDS and switch over to a single EC2 instance.

Now, we would need to set up:

Webserver (Nginx)

WSGI (uWSGI)

SSL (Let's Encrypt)

Database (MariaDB)

Setting everything up didn't take too long, but it was definitely more work than using Beanstalk + RDS. Once the pieces were configured, we wrapped them all up in systemd unit files.
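As an example, a unit file for the uWSGI app server might look something like the following. The paths, user, and service names here are illustrative assumptions, not our exact configuration:

```ini
# /etc/systemd/system/theothermail.service (illustrative paths and names)
[Unit]
Description=TheOtherMail web app (uWSGI)
After=network.target mariadb.service

[Service]
User=www-data
WorkingDirectory=/srv/theothermail
ExecStart=/srv/theothermail/venv/bin/uwsgi --ini /srv/theothermail/uwsgi.ini
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With units like this in place, the whole stack comes back up on reboot and restarts on failure without any extra tooling.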

After making the changes, our costs dropped to ~$10/mo. Of course, we made lots of tradeoffs in making the change. Most notably:

Deployments would result in a small amount of downtime

Our web app and database were bottlenecked by the same resource constraints since they lived on the same machine

Since we weren't dealing with problems at scale, these compromises were acceptable.

Lessons Learned

Don't build for scale unless you have millions of users lined up. AWS services that charge per use (rather than per hour) are great for low-volume workloads. Lambda works great for atomic functions and services. Start with a monolith, and scale out the web and data tiers as volume increases.

For future projects, we will lean more towards going serverless. At the price of added complexity and trickier debugging, applications built on Lambda appear to scale beautifully, both computationally and financially.