TL;DR

Looking for reusable development and production CI/CD pipelines for AWS Lambda (with the Serverless Framework)? Just open your Codefresh account, copy the two pipelines from this post, provide your AWS credentials and configuration, and start writing your serverless functions. These CI/CD pipelines automatically push changes to your testing and production environments.

Motivation

Microservice architecture plays a major role in many organizations today, and its adoption has grown significantly in the last few years. Docker containers and container-cluster technologies like Kubernetes and Swarm have simplified the adoption of microservices, especially the way they are packaged and deployed. Still, when you use containers to develop and deploy your microservices, there is an additional price to pay: you need to know containers and container clusters pretty well and keep that knowledge up to date.

On the other hand, with serverless technologies, you care less about the deployment and operation of your applications. For this and other reasons, serverless technologies like Function as a Service (FaaS) have become widely used by many companies. A serverless architecture provides an additional, more granular way to implement microservices, spending less effort on developing, running, and scaling individual services.

When a company selects a serverless architecture to implement microservices, there is still a need to define how to manage serverless code, and how to build scalable, fast and effective CI/CD pipelines.

Managing Serverless code

In serverless architecture, there is usually a single event handler function. There may also be additional helper functions and imported libraries.

This function should follow the single-responsibility principle, so you should not expect to find unreasonably long functions with thousands of lines of code.

Besides the function code, you also need to define a deployment configuration that can include additional resources, permissions, and endpoints. For an AWS Lambda function, this could be a CloudFormation template that configures multiple AWS services such as S3, IAM, DynamoDB, API Gateway, and others.

Mono repositories

One approach to managing multiple serverless microservices is to put all of them into a single code repository, also called a “mono repository”. While this can simplify code management and encourage sharing, it can also create undesired coupling between different services and lead to ineffective CI/CD.

Once you commit code to a mono repository, most existing CI/CD tools cannot distinguish what has changed, so they do a full rebuild, test, and deploy for every service in the repository. This makes the whole CI/CD process highly inefficient and very slow.

If you select a mono repository for code management, make sure to pick a CI/CD tool that natively supports mono repositories (i.e. can understand the essence of a change and run CI/CD only for the modified services, skipping unchanged ones).
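
To make the idea concrete, here is a minimal sketch of deriving the set of changed services from a commit's file list. It assumes a layout of one service per top-level directory, which is an assumption for illustration, not a requirement of any particular tool:

```shell
# Map a list of changed files (one path per line on stdin) to the set of
# top-level service directories that need a rebuild. In a real pipeline the
# file list would come from something like:
#   git diff --name-only HEAD~1..HEAD
changed_services() {
  # keep only the first path component and deduplicate
  cut -d/ -f1 | sort -u
}

# Example: rebuild only the services touched by the last commit
# git diff --name-only HEAD~1..HEAD | changed_services
```

Each name this prints is a service directory whose pipeline should run; everything else can be skipped.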

Micro repositories

Another approach is to keep each service in its own code repository. I will call this approach “micro repository”, by analogy with microservice. The idea is that each of these repositories is pretty small, containing one function, its configuration, and required dependencies. You should also expect that, once the code matures, there will not be many changes to a micro repository.

When you modify one service's code, it does not affect the other services. But when you have to update multiple services to implement one feature, you are forced to work with multiple repositories and multiple branches, which makes cross-cutting changes challenging (hopefully they are not frequent). Frequent cross-cutting changes can also mean that your microservice boundaries are wrong and there is strong coupling between services.

As for CI/CD: on one hand, it is easier to use familiar processes and define a pipeline per repository. On the other hand, when you have many repositories and the CI/CD pipeline code is almost as large as the serverless function code itself, maintaining hundreds of nearly identical pipelines can quickly become a challenge.

So, when you practice a micro repository approach, select a CI/CD tool that either has a short and simple DSL for describing CI/CD for serverless and/or can support pipeline reuse, using the same pipeline for multiple micro repositories.

Hybrid repositories

In practice, consider a hybrid approach: group multiple serverless services from the same or adjacent domains into the same code repository. Once you do this, most cross-cutting changes are likely to happen within a single repository.

Tools and Services

Serverless Framework

The Serverless Framework supports building and deploying serverless applications to multiple cloud providers with a consistent experience, hiding provider differences. It automatically generates cloud-vendor configuration based on the language used and the target provider; for example, it creates a CloudFormation template when you target AWS Lambda.
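
As an illustration, a hypothetical minimal serverless.yml for a Node.js HTTP endpoint might look like the sketch below (service and function names are made up here; the structure follows Serverless Framework 1.x conventions). Running serverless package against it produces the CloudFormation template, so you never write CloudFormation by hand:

```yaml
# Hypothetical minimal serverless.yml for an AWS Lambda HTTP endpoint
service: current-time

provider:
  name: aws          # target provider; swapping this retargets the service
  runtime: nodejs10.x
  region: us-east-1
  stage: develop

functions:
  currentTime:
    handler: handler.endpoint   # file handler.js, exported function `endpoint`
    events:
      - http:                   # provisions an API Gateway endpoint
          path: ping
          method: get
```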

There are other similar tools, but the Serverless Framework is one of the most popular tools with a big community around it.

Serverless-friendly CI/CD

Choose a CI/CD tool that either has a built-in integration with a desired serverless platform (or cloud platform) or one that can easily be extended. This tool should also be able to support both mono repositories (understand change at a service level) and micro repositories (allow reuse of pipelines across multiple repositories).

Give Codefresh CI/CD a try. It has native support for whichever code management strategy you choose.

Codefresh supports mono repositories, allowing you to execute pipelines based on commit content, using glob patterns to detect changed services. Codefresh also supports pipeline reuse, letting you connect the same CI/CD pipeline to multiple unrelated code repositories.

Each serverless platform/framework has its own syntax and tools to simplify the development, deployment, and testing of serverless functions. The CI/CD platform you choose should allow you to effectively use your tool of choice.

Being a Docker-centric CI/CD service with first-class Kubernetes integration, Codefresh runs every pipeline step within a Docker container. This provides both great isolation between pipeline steps and makes it easy to bring in your own tool. We’ve prepared a reusable Docker image with Serverless Framework on board.

Testing Serverless applications

Writing serverless applications can simplify business-logic development, but such applications can be hard to test. First, if your serverless service depends on external cloud services or event providers, these can be challenging to emulate for local test runs. And you still need to run real integration tests anyway.

Unit testing

To get around these challenges, try to write testable code. Abstract any external resources and APIs your serverless function uses behind a clean interface, and write stubs for these abstractions. Avoid platform-specific code, i.e. code that can only execute inside a specific serverless platform. Write the main function in a generic way and wrap it with a platform-specific function that contains no logic and just serves as a binder between the platform and your function.

If you can write tests for your serverless function, achieving full coverage, and can execute these tests in a single process with a testing framework only, then you did a great job writing testable code.

Also consider using static code analysis tools, linters, and security scanners to eliminate more potential issues.

Integration testing

While unit testing is a must, it is not sufficient on its own. You have to test a serverless function on a real battlefield: create a dedicated testing environment, trigger the expected event, and analyze the function's flow, performance, and resource usage. Free unneeded resources once integration testing is complete.

Acceptance testing

Acceptance testing is a non-destructive flavor of integration testing. Run acceptance tests to validate a new deployment to the production environment, and roll back if they fail.

Codefresh CI/CD pipelines

Development CI/CD pipeline

The following pipeline defines a CI/CD flow for any development branch.

The Flow

1. Trigger

The pipeline is triggered when a developer pushes a new change to a development branch. The first implicit action the pipeline takes is a git clone.

Then, the pipeline should get two contexts for proper execution:

- commit context – commit details (commit id, branch, modified files, etc.)
- environment context – secrets, configurations, etc. (auto-selected based on the active branch)

2. Unit Test

This is a basic code validation step. In this step, the pipeline executes unit tests, linters, and static code analysis.

3. Setup

Working with a serverless cloud provider like AWS Lambda requires configuring credentials and target runtime environments. Consider using separate AWS accounts and regions for the test and production environments.

One approach to setting up credentials is to use a shared credentials file with two profiles: develop and production.

In order to make this file available for Codefresh CI/CD pipeline, you need to perform the following:

1. Encode ~/.aws/credentials (assuming this is the file you want to use) with base64 encoding, without newline characters: cat ~/.aws/credentials | openssl base64 -A
2. Create an encrypted AWS_CREDENTIALS_FILE Codefresh variable with the encoded content of the AWS credentials file
3. Recreate the credentials file in the pipeline with the echo -n $AWS_CREDENTIALS_FILE | base64 -d > ${PWD}/.aws/credentials command
4. Make the credentials file available to all aws CLI commands by pointing AWS_SHARED_CREDENTIALS_FILE to its location, with the cf_export AWS_SHARED_CREDENTIALS_FILE=${PWD}/.aws/credentials command
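
The steps above can be sketched end-to-end as a round trip. This sketch uses a scratch file instead of a real ~/.aws/credentials, and a plain shell variable stands in for step 2 (which in reality happens in the Codefresh UI as an encrypted variable):

```shell
# Create a scratch credentials file (stand-in for ~/.aws/credentials)
printf '[develop]\naws_access_key_id = AKIA...\n' > credentials.orig

# Step 1: encode without newlines (equivalent to `openssl base64 -A`)
AWS_CREDENTIALS_FILE=$(base64 -w 0 < credentials.orig)

# Step 3: recreate the file inside the pipeline workspace
mkdir -p .aws
echo -n "$AWS_CREDENTIALS_FILE" | base64 -d > .aws/credentials

# Step 4: point the AWS CLI at it. In Codefresh this would be `cf_export`,
# which propagates the variable to later steps; plain `export` shows the effect.
export AWS_SHARED_CREDENTIALS_FILE="${PWD}/.aws/credentials"
```

The decoded file is byte-for-byte identical to the original, so every aws CLI call in later steps picks up the develop and production profiles.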

4. Package

The serverless package command packages the entire AWS Lambda service into the .serverless directory by default and makes it ready for deployment. You can specify another packaging directory (for example, one per stage) by passing the --package option.

It is a good idea to create a deployment package and archive it for future use and/or traceability.

5. Deploy

The serverless deploy --package command deploys the entire service via CloudFormation to the test environment, using the previously prepared package.

6. Integration Test

To run integration tests, you need to create the event that triggers the AWS Lambda function execution. You can use the aws CLI/API to create the required event, or the serverless invoke command, passing the event and context as command parameters.
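
The integration step records the test result instead of failing immediately (which is why it sets fail_fast: false), so later steps can still clean up and decide the final pipeline status. A minimal sketch of that pattern, with a stub standing in for Codefresh's cf_export and an arbitrary command standing in for the real serverless invoke call:

```shell
# Stub: in a Codefresh pipeline, cf_export propagates variables to later
# steps; a plain export lets the pattern run anywhere.
cf_export() { export "$@"; }

run_integration() {
  # In the real pipeline the tested command is:
  #   serverless invoke --function currentTime --stage ${AWS_STAGE} \
  #     --region ${AWS_REGION} --path test/data.json
  "$@" && cf_export INTEGRATION_FAILED=false \
       || cf_export INTEGRATION_FAILED=true
}
```

The rollback, cleanup, and decide_on_status steps then branch on INTEGRATION_FAILED rather than on the step's own exit code.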

If the integration tests pass, the pipeline creates a new Pull Request from the current development branch. Once merged (manually or automatically), this Pull Request triggers the production CI/CD pipeline.

7. Cleanup

To reduce charges, it is better to clean up all allocated AWS resources, regardless of the integration tests' status. The serverless remove command does the required cleanup.

Codefresh develop CI/CD pipeline: develop.yaml

version: '1.0'
steps:
  check_master:
    image: alpine:3.7
    title: fail on master branch
    commands:
      - echo "cannot run this pipeline on master"
      - exit 1
    when:
      branch:
        only:
          - master
  setup:
    image: alpine:3.7
    title: generate AWS shared credentials file
    commands:
      - mkdir -p .aws
      - echo -n $AWS_CREDENTIALS_FILE | base64 -d > ${PWD}/.aws/credentials
      - cf_export AWS_SHARED_CREDENTIALS_FILE=${PWD}/.aws/credentials
  test:
    image: node:10-alpine
    title: lint and test
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - yarn lint
      - yarn test
  package:
    image: codefresh/serverless:1.28
    title: package serverless service
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - serverless package --stage ${AWS_STAGE} --region ${AWS_REGION} --package ${PACKAGE}
  archive:
    image: mesosphere/aws-cli
    title: archive package to S3 bucket
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - aws --profile ${AWS_PROFILE} --region ${AWS_REGION} s3 cp ${PACKAGE} s3://${AWS_BUCKET}/${{CF_BRANCH}}/${{CF_SHORT_REVISION}}/ --recursive
  deploy:
    image: codefresh/serverless:1.28
    title: deploy to AWS with serverless framework
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - serverless deploy --conceal --verbose --stage ${AWS_STAGE} --region ${AWS_REGION} --aws-profile ${AWS_PROFILE} --package ${PACKAGE}
  integration:
    image: codefresh/serverless:1.28
    title: run integration test
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    fail_fast: false
    commands:
      - serverless invoke --function currentTime --stage ${AWS_STAGE} --region ${AWS_REGION} --path test/data.json && cf_export INTEGRATION_FAILED=false || cf_export INTEGRATION_FAILED=true
  rollback:
    image: codefresh/serverless:1.28
    title: rollback if integration test failed
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - ${INTEGRATION_FAILED} && echo "rollback to previous version on error" || true
      - ${INTEGRATION_FAILED} && if [ ! -z "${KEEP_VERSION}" ]; then serverless rollback --verbose --timestamp ${KEEP_VERSION} --region ${AWS_REGION} --stage ${AWS_STAGE} --aws-profile ${AWS_PROFILE}; fi || true
  cleanup:
    image: codefresh/serverless:1.28
    title: cleanup allocated resources
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - serverless remove --verbose --region ${AWS_REGION} --stage ${AWS_STAGE} --aws-profile ${AWS_PROFILE}
  release_pull_request:
    image: codefresh/serverless:1.28
    title: create a pull-request for release, if integration tests passed
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - if [ ${INTEGRATION_FAILED} == false ]; then curl -H "Authorization: token ${GITHUB_TOKEN}" -d '{"title":"release of ${{CF_BRANCH}}","base":"master","head":"${{CF_BRANCH}}"}' https://api.github.com/repos/${{CF_REPO_OWNER}}/${{CF_REPO_NAME}}/pulls; fi
  decide_on_status:
    image: alpine:3.7
    title: decide on pipeline status
    commands:
      - if [ ${INTEGRATION_FAILED} == true ]; then echo "integration tests failed" && exit 1; fi

If you want to achieve Continuous Deployment, you can also automate the merging of the Pull Request and thus trigger the CI/CD pipeline for the production environment.

Production CI/CD pipeline

The Flow

1. Trigger

The production pipeline is triggered when a Pull Request from a development branch is merged into master. If you automate the merge process, you achieve Continuous Deployment.

The first implicit action taken by the pipeline is the git clone command.

Then, the pipeline should get two contexts for proper execution:

- commit context – commit details (commit id, master branch, modified files, etc.)
- environment context – secrets, configurations, etc. for the production environment

2. Unit Test

This is a basic code validation step. In this step, the pipeline executes unit tests, linters, and static code analysis. It’s possible to skip this step.

3. Setup

This step is exactly the same as in the development CI/CD pipeline. You will need to configure AWS credentials and specify the target environment (region, stage, etc.).

4. Package

The serverless package command packages the entire AWS Lambda service into the .serverless directory by default and makes it ready for deployment. You can specify another packaging directory by passing the --package option.

It is a good idea to create a deployment package and archive it for future use and/or traceability.

5. Deploy

The serverless deploy --package command deploys the entire service via CloudFormation to the production environment, using the previously prepared package.

6. Acceptance Tests

In this step, run mainly non-destructive acceptance tests. These can be the same integration tests used in the development pipeline, or dedicated acceptance tests.

If the acceptance tests fail, the pipeline rolls back the deployment to the previous working version.
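
The rollback target is captured before deploying: the deploy step parses serverless deploy list for the last Timestamp line and exports it as KEEP_VERSION. Below is a self-contained sketch of that extraction; the here-doc stands in for the real command output, and its exact format is an assumption based on Serverless Framework 1.x:

```shell
# Stand-in for `serverless deploy list --stage ${AWS_STAGE} --region ${AWS_REGION}`,
# which prints one "Timestamp: N" line per deployed version, oldest first.
deploy_list_output() {
  cat <<'EOF'
Serverless: Listing deployments:
Serverless: -------------
Serverless: Timestamp: 1533213790229
Serverless: Datetime: 2018-08-02T12:43:10.229Z
Serverless: -------------
Serverless: Timestamp: 1533220630038
Serverless: Datetime: 2018-08-02T14:37:10.038Z
EOF
}

# Same extraction as the pipeline's deploy step: keep the last (most recent)
# timestamp, i.e. the version to roll back to if acceptance tests fail.
KEEP_VERSION=$(deploy_list_output | grep Timestamp | tail -1 | awk '{print $3}')
```

The rollback step then runs serverless rollback --timestamp ${KEEP_VERSION} only when the variable is non-empty, so a first-ever deployment (with no previous version) is not rolled back.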

Codefresh production CI/CD pipeline: production.yaml

version: '1.0'
steps:
  check_non_master:
    image: alpine:3.7
    title: fail on non-master branch
    commands:
      - echo "cannot run this pipeline on non-master"
      - exit 1
    when:
      branch:
        ignore:
          - master
  setup:
    image: alpine:3.7
    title: generate AWS shared credentials file
    commands:
      - mkdir -p .aws
      - echo -n $AWS_CREDENTIALS_FILE | base64 -d > ${PWD}/.aws/credentials
      - cf_export AWS_SHARED_CREDENTIALS_FILE=${PWD}/.aws/credentials
  test:
    image: node:10-alpine
    title: lint and test
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - yarn lint
      - yarn test
  package:
    image: codefresh/serverless:1.28
    title: package serverless service
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - serverless package --stage ${AWS_STAGE} --region ${AWS_REGION} --package ${PACKAGE}
  archive:
    image: mesosphere/aws-cli
    title: archive package to S3 bucket
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - aws --profile ${AWS_PROFILE} --region ${AWS_REGION} s3 cp ${PACKAGE} s3://${AWS_BUCKET}/${{CF_BRANCH}}/${{CF_SHORT_REVISION}}/ --recursive
  deploy:
    image: codefresh/serverless:1.28
    title: deploy to AWS with serverless framework
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - KEEP_VERSION=$(serverless deploy list --stage ${AWS_STAGE} --region ${AWS_REGION} | grep Timestamp | tail -1 | awk '{print $3}') || true
      - cf_export KEEP_VERSION=${KEEP_VERSION}
      - serverless deploy --conceal --verbose --stage ${AWS_STAGE} --region ${AWS_REGION} --aws-profile ${AWS_PROFILE} --package ${PACKAGE}
  acceptance:
    image: codefresh/serverless:1.28
    title: run acceptance test
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    fail_fast: false
    commands:
      - serverless invoke --function currentTime --stage ${AWS_STAGE} --region ${AWS_REGION} --path test/data.json && cf_export ACCEPTANCE_FAILED=false || cf_export ACCEPTANCE_FAILED=true
  rollback:
    image: codefresh/serverless:1.28
    title: rollback if acceptance test failed
    working_directory: ${{main_clone}}/examples/aws-node-simple-http-endpoint
    commands:
      - ${ACCEPTANCE_FAILED} && echo "rollback to previous version on error" || true
      - ${ACCEPTANCE_FAILED} && if [ ! -z "${KEEP_VERSION}" ]; then serverless rollback --verbose --timestamp ${KEEP_VERSION} --region ${AWS_REGION} --stage ${AWS_STAGE} --aws-profile ${AWS_PROFILE}; fi || true
  decide_on_status:
    image: alpine:3.7
    title: decide on pipeline status
    commands:
      - if [ ${ACCEPTANCE_FAILED} == true ]; then echo "acceptance tests failed, rollback to previous version" && exit 1; fi

Summary

Using the Serverless Framework together with Codefresh makes it easy to create highly effective CI/CD pipelines for serverless applications.

I hope you find this post useful. I look forward to your comments and any questions you have. To try this out, create a free Codefresh account and start building, testing and deploying Docker images faster than ever.

Go on and give it a try!