This content refers to Spinnaker’s legacy Kubernetes provider (V1), which is scheduled for deletion in Spinnaker 1.21. We recommend using the manifest-based provider (V2) instead.

Over the last couple of weeks I’ve been working on standing up a Spinnaker instance for our development teams to use when deploying applications to Kubernetes. In an effort to support a DevOps culture at Skuid, the team I’m on has been working on supplying those teams with tools to support deployment and visibility into production. During our evaluation of Spinnaker we’ve found that certain Spinnaker resources map directly to Kubernetes but those mappings aren’t immediately obvious. I thought it would be useful to put them down in one place for myself and anyone else who may be in the same boat.

What is Spinnaker?

Spinnaker is a solution for supporting CD (Continuous Delivery) operations for software engineering teams. Developed originally by Netflix, it enables teams to deploy software to multiple environments such as AWS, Google Cloud Platform, OpenStack, and Kubernetes. To give you an idea of the scale it can support, Spinnaker manages 95% of all deployments at Netflix, which amounts to upwards of 3,000 deployments daily (source). From an operations perspective, it is a tool that enables development teams to safely and efficiently build deployment pipelines and monitor applications running in multiple environments from a single interface.

Regions (Namespace)

Regions in Spinnaker relate to namespaces in Kubernetes. Regions are more applicable when working with federated Kubernetes clusters, as you likely have nodes running in more than one region (nodes in `us-west-2` and `us-east-1`, for instance).
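For reference, the Kubernetes object behind a “region” here is just a Namespace. A minimal manifest (the name is purely illustrative) looks like:

```yaml
# A Namespace -- what Spinnaker's V1 provider surfaces as a "region"
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # shows up as the region name in the Spinnaker UI
```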

Accounts

When working with Spinnaker, an Account directly relates to a Cluster as defined in the `.kube/config` file. This file is what we use to tell `kubectl` and other Kubernetes clients where our clusters are and how to communicate with them. Most Kubernetes clients do a great job of respecting this file, so configuring them is as easy as configuring `kubectl`. The component in Spinnaker responsible for interacting with our Kubernetes cluster is called Clouddriver. It reads our `.kube/config` file and adds each “Cluster” entry as an Account in Spinnaker, provided it is listed in our Clouddriver configuration.
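As a sketch of how that wiring looks, here is a hedged example of a V1 Clouddriver account entry pointing at a kubeconfig context. The account name, context name, and file path are all made up for illustration:

```yaml
# Clouddriver configuration (excerpt) -- values here are illustrative
kubernetes:
  enabled: true
  accounts:
    - name: my-k8s-account            # becomes the "Account" in the Spinnaker UI
      kubeconfigFile: /home/spinnaker/.kube/config
      context: my-cluster-context     # which kubeconfig cluster/context to use
      namespaces:                     # optional: restrict which namespaces
        - default                     # ("regions") this account can see
```

If `context` is omitted, Clouddriver falls back to the kubeconfig's current context, which is why a well-maintained `.kube/config` gets you most of the way there.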

Clusters & Server Groups (ReplicaSet/Deployment)

Clusters are the Spinnaker name for Replica Sets and Deployments. If you were to point a new Spinnaker instance at one or more Kubernetes clusters, Spinnaker would read the information about those Replica Sets and Deployments and create a new Application for each one. Likewise, to create a new Replica Set or Deployment for an Application, you create a new Server Group and define your application just as you would a Replica Set. Spinnaker defaults to using Replica Sets for applications unless you specifically denote that it should be a Deployment. One of the downsides I’ve seen, however, is that Deployments aren’t as well supported as Replica Sets, which leads to some holes in functionality. For instance, you cannot resize Deployments via the Spinnaker UI, though this is a known limitation (GitHub issue #1460).
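To make the mapping concrete, a Server Group created through the UI corresponds roughly to a Replica Set manifest like the one below. The app and stack names are made up; Spinnaker names server groups with an app-stack-detail-vNNN convention, bumping the version number on each deploy:

```yaml
# What a Spinnaker Server Group roughly becomes in Kubernetes (illustrative)
apiVersion: extensions/v1beta1   # the ReplicaSet API group of this era
kind: ReplicaSet
metadata:
  name: myapp-prod-v000          # app-stack-detail-vNNN naming convention
spec:
  replicas: 3                    # the "resize" target in the Spinnaker UI
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: example.com/myapp:1.0.0   # chosen in the deploy stage
```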

Better support for Deployments are planned however, as their presence in Kubernetes was less prevelant at the time of the initial implementation. For the Kubernetes community, this means there’s a great opportunity to come in and contribute!

Load Balancers (Service)

Load Balancers are probably the most easily identifiable component when mapping Spinnaker resources to Kubernetes: they define Services in your Kubernetes cluster. When creating a Load Balancer, you are presented with all the usual Service options, including Port, Target Port, and Type. Selectors are defined by Service annotations and map to the key/value pairs that match the Service to a Replica Set or Deployment.
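Putting that together, the Load Balancer wizard is effectively filling out a Service manifest along these lines. Names, ports, and the selector label are all illustrative:

```yaml
# What a Spinnaker Load Balancer becomes in Kubernetes (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb
spec:
  type: LoadBalancer     # the "Type" field in the wizard
  ports:
    - port: 80           # "Port" -- what clients connect to
      targetPort: 8080   # "Target Port" -- the container's listening port
  selector:
    app: myapp           # key/value pairs matching the Replica Set's pods
```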

Security Groups (Ingress)

One component of Kubernetes that I’ve recently been experimenting with is Ingress. Ingress defines a set of rules for routing traffic via an Ingress Controller, like Traefik or, more recently, Linkerd. The most useful aspect of Ingress is that you can specify a single point of ingress to your cluster (a Service of type LoadBalancer) and expose multiple services behind it.

In Spinnaker, to create these Ingress resources, you would create a Security Group. The wizard in the Spinnaker UI presents all the options you would be able to set on an Ingress resource like Host and Path. Then you would point these rules at a specific Load Balancer (Service). Ingress Controllers such as Traefik would pick up on these rules and start routing traffic to the appropriate service.
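As an illustration, the Security Group wizard’s Host, Path, and Load Balancer fields correspond to an Ingress manifest roughly like this one. The resource name, host, and backend service are made up, and `extensions/v1beta1` was the Ingress API version of this era:

```yaml
# What a Spinnaker Security Group (V1) becomes in Kubernetes (illustrative)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: myapp.example.com    # the "Host" field in the wizard
      http:
        paths:
          - path: /              # the "Path" field
            backend:
              serviceName: myapp-lb   # the Load Balancer (Service) to target
              servicePort: 80
```

An Ingress Controller such as Traefik watches for these resources and begins routing matching traffic to `myapp-lb` without any further configuration.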

In the future, there are plans to merge Ingress into Load Balancer configuration and have Security Groups take over managing Network Policies which is great news for users of Calico and the like.

Conclusion

Spinnaker has made a name for itself as a scalable tool that can support any number of requirements. For Kubernetes, this means that operations teams get the resiliency they have come to expect from Kubernetes along with a simple interface for deploying and managing applications hosted on those clusters.

Overall, I’m really impressed with how Spinnaker fills the gap we’ve found in our operations, and I hope to see its support for Kubernetes grow over time.

Huge thanks to Lars Wander and everyone in the Kubernetes channel on the Spinnaker Slack for answering my questions and reviewing this post!