As you can see, AWS Fargate is essentially a launch type for Amazon ECS. Many DevOps practitioners consider AWS Fargate a serverless container solution within the AWS ecosystem. Check out this guide to get started with AWS Fargate.

What Does “AWS EKS on Fargate” Mean?

As mentioned earlier, traditional deployments of Kubernetes clusters on AWS require you to provision and manage worker nodes manually. In other words, the infrastructure that runs the pods is your responsibility. However, AWS' newer approach not only runs the cluster's control plane in a managed fashion, it also manages the infrastructure on which pods run via a Fargate Profile. Running a Kubernetes cluster on Fargate means the infrastructure is fully managed, freeing you from all provisioning work. With this arrangement, you can forget about updating, patching, and securing worker nodes and focus on building your apps. The AWS documentation summarizes the benefits of this new EKS extension well: "With Amazon EKS and AWS Fargate, you get the serverless benefits of Fargate, the best practices of Amazon EKS, and the extensibility of Kubernetes out of the box."

Running AWS EKS on AWS Fargate is the path to follow if you want an entire Kubernetes cluster running under the serverless compute model.

Use Cases

A few use case scenarios for this mix of services are:

Small teams. If you're developing an application at scale (with containers) and your team is small (fewer than five developers), there likely won't be enough time to provision and manually manage EC2 resources. When there's no time for infrastructure, a managed serverless offering like AWS EKS on Fargate is the way to go.

Prototyping. If you're using containers and need to validate an approach, don't waste your time on infrastructure provisioning. Go serverless and get insights earlier. Later on, if the approach proves viable and is approved, you can redeploy the application with a better, more thoughtful architecture.

Legacy projects. When your team inherits a maintenance project that runs on Kubernetes and none of the developers are familiar with the orchestrator, run the pods on Fargate. That way, the team can buy some time to get familiar with how Kubernetes works. Later on, they can switch to an EC2-based cluster for the pods and make customizations as needed.

Limitations

Running AWS EKS on Fargate presents some limitations that are important to consider while defining a deployment approach. These include the following:

There is no support for stateful workloads. Your containers or pods cannot mount persistent volumes or use any file-system-like mechanism. Pods with state or a persistence layer need to run on regular EC2-backed clusters.

Only AWS Application Load Balancers are supported. In other words, your pods can receive external traffic only over HTTP/HTTPS.

A pod on Fargate is limited to a maximum of 4 vCPUs and 30 GB of memory.

You will get locked into a vendor.
Keep in mind that these limitations apply only to pods that run on Fargate. If any of them restrict you, you can still run pods on an AWS EC2-based cluster and mix both execution models.

Deploying a Small API on AWS EKS on Fargate

As an example, we're going to deploy a small endpoint with AWS EKS on Fargate. The endpoint will be an HTTP GET handler written in Go that returns a welcome message.
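A minimal sketch of what that handler might look like in Go — the message text and port here are illustrative placeholders, not necessarily the exact code we'll end up deploying:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// welcomeMessage is the body returned for GET /.
// The exact text is a placeholder for this sketch.
func welcomeMessage() string {
	return "Welcome to our API!"
}

// welcomeHandler answers requests to the root URL with the welcome message.
func welcomeHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, welcomeMessage())
}

func main() {
	http.HandleFunc("/", welcomeHandler)
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Running this with `go run main.go` and hitting http://localhost:8080/ returns the welcome message.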

To do this, we’ll use the following tools:

AWS CLI: Required for authentication.

Docker: Required for building and pushing the Docker image.

kubectl: The CLI needed to interact with the Kubernetes API.

eksctl: The official CLI needed to create an EKS cluster.

Describing the API

Method: GET
URL: /

As mentioned above, the API consists of a single endpoint mapped to the root URL, so when you hit it, you'll receive a welcome message. Locally, this message is: