When I started my company, Clarity Hub, my first goal was to make sure I could test and deploy new versions as fast as possible. I set up Terraform and deployed to AWS using an EC2 micro instance for each of our microservices.

It was great. We had staging and production environments, and with Docker and Docker Compose, every dev could run a full environment locally on their machine.
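A local setup like that is typically driven by a single Compose file. Here is a minimal sketch of what such a file could look like; the service names, images, and ports are illustrative, not Clarity Hub's actual configuration:

```yaml
# Hypothetical docker-compose.yml for running a few microservices
# plus a shared database locally. Each service builds from its own
# directory and waits for the database container to start.
version: "3"
services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    depends_on:
      - db
  chat:
    build: ./chat
    depends_on:
      - db
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example
```

With a file like this, `docker-compose up` brings up the whole environment on one machine, which is what made local development painless.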

The problem came once we hit around 12 microservices. Our staging environment costs were getting really high: since we wanted staging to replicate production, we ran an EC2 instance for every microservice, and most of those machines sat around doing absolutely nothing.

When we started talking about feature environments, so that we could do full QA testing on feature branches, we realized that replicating production's EC2 instance setup would not be feasible.

A Minor Pivot

It was around this point that Clarity Hub pivoted from an all-in-one customer support platform to being integration-focused. We would provide the bare-bones chat and AI functionality, with a robust integration- and API-driven platform around it.

We looked at AWS Lambda for writing our integrations, but it didn’t fit how we were used to developing our microservices.

A Major Pivot

Around December, the team decided to ditch the customer support platform and instead integrate our machine learning tools into existing platforms. We also decided to ditch all the containers and EC2 instances in favor of AWS Lambda.
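The unit of deployment changes with this move: instead of a long-running service, each piece of functionality becomes a single handler function. As a rough sketch (the event shape assumes an API Gateway proxy integration, and the names and "analysis" logic are purely illustrative):

```python
import json


def handler(event, context):
    """Minimal AWS Lambda handler sketch.

    API Gateway proxy events deliver the HTTP request body as a JSON
    string, so we parse it, do some work, and return a proxy response.
    """
    body = json.loads(event.get("body") or "{}")
    message = body.get("message", "")

    # Stand-in for real conversation analysis: echo a suggestion back.
    return {
        "statusCode": 200,
        "body": json.dumps({"suggestion": f"Reply to: {message}"}),
    }
```

Because functions like this hold no state between invocations, services that were already stateless map over almost directly.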

I looked for tools to help us manage all these AWS Lambda functions, and came across Serverless (among a bunch of other tools).

Our stack changed from managing our own messaging system to using Serverless and leaning much more heavily on AWS’s managed offerings.

We began cranking away at an integration with Intercom: we would analyze the conversations support staff had with customers, then surface real-time suggestions to the support staff as new conversations came in.

In Part 2, I’ll talk about the growing pains we had going from servers to serverless, but for now, know that it wasn’t an easy switch. We were lucky: we had already built mostly stateless microservices, and porting them was extremely easy thanks to the large Serverless community.

Why AWS Lambda

There are other serverless platforms out there, like Google Cloud Functions and Azure Functions, so why did we pick AWS Lambda?

- We already loved the AWS ecosystem
- We were already using RDS, SNS, Elastic, and API Gateway, and had additional plans to use SQS and EC2 for other purposes
- We had built our pipelines and tooling to deploy to and monitor AWS, and didn’t want to retool just to try out Google Cloud Functions

Why Serverless

When looking for a framework to write and deploy our AWS Lambda code, we looked for features that would help us focus on writing and delivering code. The ecosystem has tools ranging from bare-bones (like AWS Lambda Toolkit) to fully featured frameworks (like Apex). We chose Serverless because:

- It had an amazing ecosystem — check out this awesome-serverless list
- It let us focus on creating Lambda functions
- Configuration was simple
- It uses AWS CloudFormation behind the scenes, making deployment insanely simple
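To give a sense of how simple that configuration is, here is a hypothetical `serverless.yml` sketch; the service name, function name, and runtime are illustrative, not our actual config:

```yaml
# A whole deployable service in one file: Serverless turns this into a
# CloudFormation stack with the Lambda function, IAM role, and an
# API Gateway endpoint wired to it.
service: intercom-integration

provider:
  name: aws
  runtime: python3.9
  region: us-east-1

functions:
  analyzeConversation:
    handler: handler.analyze
    events:
      - http:
          path: conversations/analyze
          method: post
```

Running `serverless deploy` packages the code and applies the generated CloudFormation stack, which is what makes deployment feel so simple.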

Conclusion

The team is really happy to have moved to Serverless — albeit with a few growing pains. It lets us focus on writing features, instead of maintaining and deploying servers.