One reason users choose Spinnaker is that it provides out-of-the-box support for advanced rollout strategies, such as red/black (also known as blue/green) and dark deployments.

You can use these rollout strategies with many of Spinnaker’s cloud providers, including Amazon EC2, Google Compute Engine, and the Kubernetes V1 provider. We’re now excited to announce first-class support for several common rollout strategies in Spinnaker’s Kubernetes V2 provider. As of Spinnaker version 1.14, you can configure red/black, highlander, and dark deployments in the Deploy (Manifest) stage.

In this post, we walk through a sample deployment workflow leveraging a red/black strategy for a simple web application.

About these rollout strategies

Rollout strategies allow teams to deploy more safely and reliably.

With a dark deployment, you can test the new application version before sending user traffic to it.

With a red/black deployment, you keep the previous version available as a hot standby to enable painless rollbacks.

With a highlander deployment, previous versions are destroyed to conserve resources.

Spinnaker-orchestrated rollout strategies differ from the built-in Kubernetes Deployment strategies. Spinnaker abstracts the logic of common rollout patterns into easy-to-manage, reusable stages. You don’t need to worry about how to include your strategy configuration in each manifest, and you don’t have to manually manipulate labels in an error-prone series of kubectl calls just to execute a simple red/black rollout.
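For comparison, the built-in approach embeds rollout behavior in every manifest. A minimal Deployment fragment (the name and numbers below are illustrative, not from our example) looks like this, and its only options are RollingUpdate and Recreate:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # illustrative name only
spec:
  replicas: 3
  strategy:
    type: RollingUpdate        # the built-in choices are RollingUpdate or Recreate
    rollingUpdate:
      maxSurge: 1              # at most one extra pod during the rollout
      maxUnavailable: 0        # never drop below the desired replica count
  # selector and pod template omitted for brevity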

Example red/black workflow

The following diagrams illustrate the progression of a red/black rollout. In the next section, we will implement this pattern using Spinnaker!

First, we have a single workload taking traffic from a load balancer:

Next, we deploy a new version of our software with a second workload:

When our new workload is ready, the load balancer begins sending it client traffic. Our old workload remains ready to receive traffic in case we encounter any issues with our new workload and need to roll back:

Implement a red/black workflow in Spinnaker

First, let’s assume we have already deployed a Service with arbitrary selector labels; this is the Service from which we want our workloads to receive traffic:

kind: Service
apiVersion: v1
metadata:
  name: maggie-k8s-demo
  namespace: default
spec:
  selector:
    app: maggie-k8s-demo
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8000

Next, let’s set up a new pipeline:

Our pipeline has a single Deploy (Manifest) stage to deploy our ReplicaSet manifest (check out our documentation for more information on why we are using ReplicaSets):

apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: maggie-k8s-demo
  namespace: default
  labels:
    applicationName: maggie-k8s-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      applicationName: maggie-k8s-demo
  template:
    metadata:
      labels:
        applicationName: maggie-k8s-demo
    spec:
      containers:
      - name: primary
        image: gcr.io/spinnaker-maggie/spinnaker-kubernetes-demo
        ports:
        - containerPort: 8000
        readinessProbe:
          httpGet:
            path: /
            port: 8000

Next, let’s add our ReplicaSet manifest and its image as expected artifacts to our pipeline configuration:

These artifacts can then be associated with triggers so that our pipeline runs automatically, following updates to our manifest in our GitHub repository or to our image in Google Container Registry:
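For reference, here is a sketch of what a Docker registry trigger might look like in the pipeline’s configuration, shown as YAML for readability (the pipeline JSON editor stores the equivalent structure); the account name is a placeholder, and field names may differ slightly between Spinnaker versions:

triggers:
- type: docker
  enabled: true
  account: my-gcr-account                                # placeholder registry account
  organization: spinnaker-maggie
  registry: gcr.io
  repository: spinnaker-maggie/spinnaker-kubernetes-demo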

Next, let’s configure our pipeline’s only stage, a Deploy (Manifest) stage:

As you can see, we’ve selected our previously configured GitHub artifact as our Manifest Artifact, and our previously configured Docker image artifact as a Required Artifact to Bind.

Now for the exciting part: let’s explore the options we have available in the new Rollout Strategy Options section! The options in this section determine which Services will be associated with the workload, whether traffic should be directed to the new workload right away, and how Spinnaker will handle any pre-existing workloads in the cluster.

We’ve selected our previously deployed Service as the load balancer from which our ReplicaSet will receive traffic. Spinnaker records the relationship between a Service and a ReplicaSet by adding a special annotation to the ReplicaSet before it is deployed.

Next, because we’ve checked the box next to Traffic, Spinnaker handles the label manipulation needed for our Service to send requests to this ReplicaSet’s pods as soon as they are ready to take traffic.
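Concretely, the ReplicaSet Spinnaker submits differs slightly from the manifest stored in Git: it carries an annotation naming the Service, and its pod template picks up the Service’s selector label so the Service begins routing to the new pods. A rough sketch of the relevant fields (the version suffix and exact annotation formatting may vary by Spinnaker version):

# Sketch of the ReplicaSet as deployed, after Spinnaker’s traffic management has run.
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: maggie-k8s-demo-v001                # Spinnaker appends a version suffix
  annotations:
    traffic.spinnaker.io/load-balancers: '["service maggie-k8s-demo"]'
spec:
  template:
    metadata:
      labels:
        applicationName: maggie-k8s-demo
        app: maggie-k8s-demo                # added so the Service’s selector matches these pods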

Finally, we have selected a red/black strategy, which means Spinnaker takes care of the label manipulation necessary to disable any existing ReplicaSets deployed in the same cluster and namespace.
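Taken together, these three choices end up in the stage’s configuration. The following is a sketch of that block, shown as YAML for readability (the pipeline JSON editor stores the equivalent structure, and the field names are recalled from memory, so they may differ slightly between Spinnaker versions):

trafficManagement:
  enabled: true
  options:
    namespace: default
    services:
    - service maggie-k8s-demo              # the Service selected above
    enableTraffic: true                    # the Traffic checkbox
    strategy: redblack                     # alternatives include highlander and none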

If we update our ReplicaSet manifest or Docker image, Spinnaker triggers our pipeline, deploying our new ReplicaSet with the appropriate annotation and labels. Our new ReplicaSet starts taking client traffic, and Spinnaker handles disabling our old ReplicaSet.

The Rollout in Action

The following screenshots illustrate the progression of this rollout as seen in Spinnaker’s Clusters view:

Before deployment:

Deployment of new version started (not yet receiving traffic):

Traffic switched to new version (old version still available for fast rollback):

Thanks for reading! For more details on each strategy and available configuration options, check out our docs and sample pipelines!