The scalability, flexibility, and reduced cost promised by serverless architecture have fueled its rapid growth, a reported 75%, outpacing other AWS cloud services. That was compelling enough for us to dip our toes into serverless computing. It's not a magic bullet for everything, but there are certain use cases where it outperforms other cloud services.

In this blog, we share our experience with serverless architecture, both the perks and the quirks we have faced.

First, a little introduction: the diagrams below show the deployment architectures we've used over time.

Monolith Architecture

A monolithic application bundles all services (authentication, queuing, and every other module) into a single application running on a single server. This used to be our default way of developing applications.

This architecture has served us well in some applications. We have built numerous MVPs and POCs following it.

The pros of this architecture are:

Less maintenance cost

Easier testing

Good initial performance

Enables rapid application development

Microservices Architecture

Later, to support independent modules and larger development teams, we started experimenting with microservices architecture and gradually moved every new project to it. It is highly scalable, better organized, and quick to deploy.

Nevertheless, both monolithic and microservices architectures shared some common challenges:

Auto-scaling

Network Security

Infrastructure Pricing

Operational Cost

High Development Cost

Environment Incompatibility

We solved autoscaling and the incompatibility between development and production environments by containerizing with Docker.

However, these applications still ran on IaaS (EC2 instances), which bills even when the application is idle, and we found that cost to be significant. Pre-provisioning or over-provisioning storage and compute always carries a cost overhead. This was one of the reasons we started experimenting with serverless architecture.
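To make the idle-cost argument concrete, here is a back-of-the-envelope comparison of an always-on instance versus pay-per-use functions. All prices and the traffic profile are illustrative assumptions, not quotes; actual AWS pricing varies by region and changes over time.

```python
# Back-of-the-envelope cost comparison: an always-on EC2 instance vs.
# pay-per-use Lambda. All prices are illustrative assumptions.

HOURS_PER_MONTH = 730

# Assumed on-demand price for a small EC2 instance (USD/hour).
EC2_HOURLY = 0.0416

# Assumed Lambda prices: per-request, and per GB-second of compute.
LAMBDA_PER_REQUEST = 0.20 / 1_000_000
LAMBDA_PER_GB_SECOND = 0.0000166667

def ec2_monthly_cost():
    # The instance bills for every hour, busy or idle.
    return EC2_HOURLY * HOURS_PER_MONTH

def lambda_monthly_cost(requests, avg_duration_s, memory_gb):
    # Lambda bills only for actual invocations and compute time.
    compute = requests * avg_duration_s * memory_gb * LAMBDA_PER_GB_SECOND
    return requests * LAMBDA_PER_REQUEST + compute

# A low-traffic service: 100k requests/month, 200 ms each, 512 MB memory.
print(f"EC2:    ${ec2_monthly_cost():.2f}/month")
print(f"Lambda: ${lambda_monthly_cost(100_000, 0.2, 0.5):.2f}/month")
```

Under these assumptions the idle EC2 instance costs roughly two orders of magnitude more per month than the equivalent Lambda workload; the gap narrows as traffic grows, so the comparison is worth redoing for your own numbers.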

Serverless Architecture

Serverless computing is a cloud-computing model in which the cloud provider manages the servers and dynamically allocates machine resources.

This kind of environment still runs on top of an OS and uses virtual machines or physical servers underneath, but the responsibility for provisioning and managing that infrastructure lies entirely with the service provider. The pricing model is based on the actual resources consumed.
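In practice, a serverless function is just a handler that the provider invokes once per event. A minimal AWS Lambda-style handler in Python might look like this (the event shape is illustrative):

```python
import json

def handler(event, context):
    """Entry point the provider invokes for each event.

    `event` carries the request payload; `context` carries runtime
    metadata (request id, remaining time). There is no server to
    manage: the provider creates and tears down the execution
    environment on demand.
    """
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for testing; in production, API Gateway or another
# event source calls the handler.
print(handler({"name": "serverless"}, None))
```

The return shape here is the one API Gateway proxy integrations expect (`statusCode` plus a string `body`); other event sources use other shapes.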

Advantages of Serverless

Less operational complexity

Scales within seconds

High availability

Multiple programming language support

Lower development cost

Secure infrastructure

Easily create/develop microservices

Faster release cycle

Limitations of Serverless

Latency and concurrency issues: a function that has not run recently incurs a "cold start" delay (the warm/cold function distinction)

Memory and runtime limits: not suited for heavy computation

Runaway billing if usage is not properly monitored
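The warm/cold distinction also shapes how you structure code: work done at module load is paid once per execution environment (the cold start), and subsequent warm invocations reuse it. A common pattern is to initialize expensive clients at module scope. A sketch, where `build_client` is a stand-in for opening a database connection or SDK client:

```python
import time

def build_client():
    # Stand-in for expensive setup (opening connections, loading config).
    time.sleep(0.05)
    return {"created_at": time.time()}

# Module-level init runs once per cold start, not once per invocation.
CLIENT = build_client()

INVOCATIONS = 0

def handler(event, context):
    global INVOCATIONS
    INVOCATIONS += 1
    # Warm invocations reuse CLIENT instead of rebuilding it.
    return {
        "invocation": INVOCATIONS,
        "client_created_at": CLIENT["created_at"],
    }
```

Every invocation served by the same warm environment sees the same `client_created_at`, which is exactly the reuse you want; a new cold environment pays the setup cost again.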

Moving to Serverless

We first experimented with an authentication module developed as a microservice. We hosted it on AWS Lambda and chose SAM (the Serverless Application Model framework provided by AWS) for rapid application development.

Our transition to serverless was accompanied by a few hiccups. Here are a few things to consider if you are moving to serverless architecture in AWS.

RDS connectivity

Internet connectivity

NAT Gateway

Roles and Policy management

Secret Credential management

The authentication microservice needed to connect to an existing application database. We quickly figured out that to access an RDS instance from a Lambda function, we needed to attach the Lambda to a VPC. However, once attached to a VPC, the Lambda lost internet connectivity.

Finally, we designed a network architecture that gives our Lambda functions access to both RDS and the internet. The solution: launching a NAT gateway.

In the diagram below, the AWS Lambda function is attached to a private subnet. All internet-bound traffic from that subnet is routed to the NAT gateway, which we hosted, along with RDS, in the public subnet. The internet gateway then handles traffic coming from the public subnet, including the NAT gateway. The yellow box indicates that the Lambda function and RDS are in the same network (VPC).

Network Architecture
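With the network path in place, the function still needs a database connection, and connection handling deserves care: opening a new RDS connection on every invocation can exhaust the database's connection limit. The usual fix is to open the connection once per execution environment and reuse it on warm invocations. A minimal sketch; the `connect` callable is injectable here, but in production it would be something like `psycopg2.connect` against the RDS endpoint, and the environment-variable names below are our own convention, not an AWS requirement:

```python
import os

_CONNECTION = None

def get_connection(connect):
    """Open the RDS connection once per execution environment.

    `connect` is any callable that opens a DB connection (e.g.
    psycopg2.connect). Host and credentials come from environment
    variables; the names here are hypothetical.
    """
    global _CONNECTION
    if _CONNECTION is None:
        _CONNECTION = connect(
            host=os.environ.get("RDS_HOST", "localhost"),
            user=os.environ.get("RDS_USER", "app"),
            password=os.environ.get("RDS_PASSWORD", ""),
        )
    return _CONNECTION
```

A handler then calls `get_connection(...)` at the top of each invocation; only the first call in a cold environment actually opens the connection.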

We kept refining our serverless app architecture. We also wanted our team to have a proper deployment pipeline with the usual three stages (dev, qa, and production) and finally arrived at the setup below.

Deployment Architecture

Let’s walk through some of the AWS services we’ve used:

Route53 DNS: Our application's root domain and its subdomains represent the stages of our app. These domain names are registered via Route53 DNS.

CloudFront: One CloudFront distribution is associated with each deployment stage. Its primary function is to cache static content and to route API and app URIs using regex path patterns.

API Gateway: Each CloudFront distribution passes API requests to its respective API Gateway; all the microservices are attached to this gateway.

S3 Bucket: The blue S3 bucket hosts static content, and the red S3 bucket hosts our AWS Lambda function code.

CloudWatch: We also set a billing alarm in CloudWatch to get notified if something unusual occurs.

To simplify credential management, we integrated Vault for storing secrets. We also built a CI/CD pipeline in GitLab for the respective git branches to ease deploying and maintaining the code.
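Fetching a secret from Vault on every invocation adds latency and load on the secret store, so it pays to cache secrets per execution environment, just like database connections. A minimal sketch of the pattern; the `fetch` callable stands in for a real Vault client call (for instance a thin wrapper around hvac's KV read), and the secret path is hypothetical:

```python
_SECRET_CACHE = {}

def get_secret(path, fetch):
    """Return the secret at `path`, fetching at most once per container.

    `fetch` is any callable that retrieves the secret from Vault.
    Caching at module scope means warm invocations skip the network
    round trip entirely.
    """
    if path not in _SECRET_CACHE:
        _SECRET_CACHE[path] = fetch(path)
    return _SECRET_CACHE[path]

# Usage with a stand-in fetcher; the path is illustrative.
secret = get_secret("apps/auth/db-password", lambda p: {"value": "s3cr3t"})
```

One caveat with this pattern: cached secrets survive for the life of the warm container, so rotated credentials only propagate once old environments are recycled; a TTL on the cache entry fixes that if rotation is frequent.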

Conclusion

After a few weeks of experimentation, we deployed our app on serverless architecture. Our development team can now focus simply on writing code.

We have found serverless to be very well suited to adopting a microservices architecture without the hassle of maintaining servers or the scalability and availability headaches.

Share your thoughts on what you are doing with serverless.