There is no shortage now of development and CI/CD tools for cloud-native application development. Developers are being pampered with options, especially when it comes to managing the CD pipeline in conjunction with Kubernetes—one of which is Skaffold.

As mentioned in multiple articles before this one, Kubernetes offers a lot of flexibility, but it isn’t the easiest deployment environment to work with. Traditionally, even minor code changes required rebuilding containers and redeploying container images.

That was the case until Skaffold, a command-line tool designed specifically to streamline the process. Skaffold turns Kubernetes development into a real-time process, actively monitoring local source code and handling the rebuilding and redeployment of applications automatically.

A Quick Overview

Before we get to how Skaffold can be used to create a real-time development experience and further optimize your CI/CD workflow, we have to understand a few things about the tool. First of all, Skaffold was originally designed to simplify application build and deployment processes. Today, however, the tool offers so much more than that.

Skaffold takes care of a few important steps. Once code changes are detected in a local source, it automatically builds artifacts using common tools and strategies. Skaffold can build images from:

A local Dockerfile

A Dockerfile in-cluster (kaniko)

A Dockerfile in the cloud (Google Cloud Build)

The artifacts are then tested and tagged before they are published. These extra steps prevent bad code changes from reaching your staging and production environments. For tagging, Skaffold even supports Go templates and environment variables.
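As a sketch of how environment-variable tagging looks in practice, the fragment below uses Skaffold's envTemplate tag policy (the image name and the FOO_VERSION variable are hypothetical, and the exact template fields vary across Skaffold API versions):

```yaml
build:
  # Tag images using a Go template that reads an environment variable.
  # FOO_VERSION is a hypothetical variable; substitute your own.
  tagPolicy:
    envTemplate:
      template: "{{.IMAGE_NAME}}:{{.FOO_VERSION}}"
  artifacts:
    - image: gcr.io/my-project/my-app  # hypothetical image name
```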

Once the artifacts are published, Skaffold takes care of deploying them using tools such as kubectl and Helm. Everything runs on your side rather than inside the cloud cluster, so Skaffold performs at its best without putting extra stress on your environment.

These features certainly make Skaffold handy as a way to develop your Kubernetes apps in real time. You can set up a staging environment to test code directly. Some development teams even integrate Skaffold into their production-ready CI/CD pipelines. The approach allows for faster and more effective iterative development. Furthermore, where you deploy is entirely your choice, and you can easily have local as well as remote clusters in the same pipeline.

The Skaffold Pipeline

The Skaffold pipeline is controlled by its main configuration file, skaffold.yaml, which contains global configurations and commands associated with the Skaffold API. You can define a few key configurations, including:

apiVersion, which dictates the Skaffold API version you want to use

kind, in this case, Config

build, which includes details on how Skaffold builds artifacts. This is where you define the tools you want to use and how artifacts are reviewed, tagged, and pushed for further processing

test, which specifies the testing every artifact must go through (e.g. container-structure-test from Google Container Tools)

deploy, which lets you choose between kubectl, Helm, and Kustomize

profiles, which allows you to set up pre-defined profiles that override parts of the main configuration
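Putting these pieces together, a minimal skaffold.yaml might look like the sketch below. The API version, image name, and manifest path are placeholders; check the schema for the Skaffold release you are running:

```yaml
apiVersion: skaffold/v2beta29   # pick the version matching your Skaffold release
kind: Config
build:
  artifacts:
    - image: gcr.io/my-project/my-app   # hypothetical image
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml   # hypothetical manifest location
```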

As long as skaffold dev or skaffold debug is running, you have access to the Skaffold API. These configurations dictate how code changes are processed along the deployment pipeline, now fully automated by the tool.

A Closer Look

From the earlier explanation, it is easy to see how Skaffold divides the CD pipeline into three main components: the build or artifacts block, the push block (test/tag/push), and the deploy block. The process of automating deployment in real time happens in the artifacts block. This is where Skaffold transforms traditional CI/CD pipelines and makes them more agile.

Check out the architecture diagram in the Skaffold repository (see References) for a fantastic overview of the Skaffold architecture.



A Dockerfile is still the most common tool used at this stage, but you can also create artifacts using Bazel and Jib. Bazel, for instance, offers a high-level, human-readable language for building artifacts and images, which makes the process even more manageable. Jib, on the other hand, is part of Google Container Tools and allows Docker images to be built from within Maven or Gradle.
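For example, a Jib-based artifact can be declared roughly as below (the image name is hypothetical); Skaffold then invokes the Jib Maven or Gradle plugin for you:

```yaml
build:
  artifacts:
    - image: gcr.io/my-project/jib-app   # hypothetical image
      jib: {}   # build with Jib instead of a Dockerfile
```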

Sync is the feature that makes the magic happen. Skaffold natively supports copying changed files to deployed containers by simply creating an archive and sending it to the target container. You have to activate sync to fully benefit from this feature, and there are a few configurations to make as well.

Inferred sync mode is the simplest way of using sync. Skaffold knows how to infer the destination paths from your Dockerfile, so instead of creating a set of one-to-one sync rules, you simply specify the files you want synchronized in the pipeline.

Here’s a simple example. Let’s say your project has a static-html folder that you want to sync. You first add:

COPY static-html static/

to your Dockerfile. This is how you tell Skaffold that you want that folder synchronized in real-time. Next, you need to add:

build:
  artifacts:
    - image: gcr.io/k8s-skaffold/your-node
      context: node
      sync:
        infer:
          - 'static-html/*.html'

to your skaffold.yaml. The combination means all HTML files in your local static-html folder will be synchronized and pushed to the <WORKDIR>/static folder in the container. With the configuration in place, everything else happens automatically and in real time.

The possibilities are endless with Skaffold’s sync. You can synchronize your .firebaserc file to /etc/ in your container for easy configuration update. You can also use sync to synchronize static and dynamic parts of your application. Whenever new changes are made, Skaffold will perform its duty right away.
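For files that inferred rules don't cover, manual sync lets you map sources to explicit destinations. A sketch of the .firebaserc example above might look like this (the image name is hypothetical):

```yaml
build:
  artifacts:
    - image: gcr.io/my-project/my-app   # hypothetical image
      sync:
        manual:
          - src: '.firebaserc'
            dest: /etc/   # copied into the running container on change
```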

The push block handles the transition between your local code source or repository and your cloud environment. Several things happen in this block, including testing and tagging. Skaffold supports tagging with Git commit IDs, with detailed logging, so you don’t have to make additional changes to your existing CI/CD pipeline.
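Git-based tagging is enabled with the gitCommit tag policy, for example:

```yaml
build:
  tagPolicy:
    gitCommit: {}   # tag images based on the current Git commit
```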

Once artifacts are tested and tagged, they are pushed to the cloud for deployment. Skaffold’s push function hands the images and all supporting resources over to the deploy block. The support for kubectl means Skaffold works seamlessly with cloud services like Google Kubernetes Engine.

Before you can deploy using kubectl, however, you need to define your manifests. Skaffold also supports options like disableValidation if you want the pipeline to be more lenient. For rapid development or staging purposes, this option can be very useful.
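A kubectl deploy section with manifests and looser validation might be sketched as below (the manifest paths are hypothetical, and the availability of the disableValidation flag depends on your Skaffold version):

```yaml
deploy:
  kubectl:
    manifests:
      - k8s/deployment.yaml   # hypothetical manifest paths
      - k8s/service.yaml
    flags:
      disableValidation: true   # skip kubectl validation for faster iteration
```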

Support for Helm adds another layer of automation to Skaffold. Since Helm is mostly used for production-level deployments, it is easy to imagine how a Skaffold + Helm combination can be utilized for rapid deployment of updates and new code. You can also override values per environment from the tool.

Helm manages deployments as releases, and from Skaffold’s configuration you have a wealth of options for each release. You can specify your Kubernetes namespace, supply values from files using setFiles, and even pass the --recreate-pods flag by setting recreatePods to true.

ImageStrategy is a particularly interesting option. It allows you to control how image references are written into your Helm values file. You can also configure Helm build dependencies and dig deep into Helm charts for fully automated deployment. The skipBuildDependencies option gives you plenty of room to customize how new code is deployed to production environments.
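Assuming a hypothetical chart, release name, and values file, a Helm release combining several of these options could be sketched as:

```yaml
deploy:
  helm:
    releases:
      - name: my-app                       # hypothetical release name
        chartPath: charts/my-app           # hypothetical local chart
        namespace: staging
        recreatePods: true                 # passes --recreate-pods to Helm
        skipBuildDependencies: true        # skip `helm dep build` before deploying
        setFiles:
          appConfig: ./config/values.yaml  # hypothetical key/file pair
```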

As an added bonus, you can run deployment pipelines that are not tied to templates by integrating Kustomize instead of kubectl or Helm. Skaffold simply runs the Kustomize commands in automated succession, so you only need to configure the pipeline once and can forget about it.
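A Kustomize-based deploy section can be as small as the sketch below (the overlay path is hypothetical, and older Skaffold releases use a slightly different field name):

```yaml
deploy:
  kustomize:
    paths:
      - k8s/overlays/dev   # hypothetical kustomization directory
```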

Integrating Skaffold

The true power of Skaffold lies in its ability to synchronize (package, test, and deploy) code changes without additional input. Implementing shorter, more focused continuous development cycles on top of Kubernetes as an orchestration platform suddenly becomes incredibly simple, regardless of your existing CI/CD pipelines.



Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with Kubernetes, cloud security, cloud infrastructure, and CI/CD pipelines. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and read more about our DevOps-as-a-Service offering.

References

GoogleContainerTools/skaffold. (2020). Retrieved 28 February 2020, from https://github.com/GoogleContainerTools/skaffold/blob/master/docs/static/images/architecture.png

