Update: I have recently switched to Okteto from Telepresence. I like it better, check it out as well.

Introduction

I have spent quite a bit of time lately trying to polish my setup and workflow for Rails web development; a lot has changed since I started working with containers and Kubernetes, so I had to adapt. I am very happy with the end result, and I’d like to share in this post what I am doing now, as I am sure this can save others some time and answer some questions they may have.

Once upon a time, doing Rails development for me meant installing all my app’s dependencies on my Mac, including Ruby, databases, things like Redis and more. This mostly worked, but it also meant developing on a platform which at times can be quite different from the platform I deploy to, and I’ve run into issues a few times because of this. There’s a lot to like about containers, but I love them particularly because they enable me to develop in an environment which is almost identical to production, and also because of the simplicity with which I can set up throwaway environments. Containers make it very easy to pack all the libraries and components an app needs to run, making sure the app always runs in exactly the same consistent way regardless of where it is running. And containers can be used to run external dependencies as well.

When I started using Docker, I was just using Docker Compose to set things up on my Mac. However, this no longer works for me because I am deploying to Kubernetes, and my app needs to interact with the Kubernetes API dynamically in order to manage things like ingresses and certificates for custom domains. I’ve recently tried tools like Garden and Skaffold (there are others) which are very interesting because they allow for continuous deployment of an app to a local or remote Kubernetes cluster during development, so that each change can be tested directly in the cluster. The idea is nice, but I found the process rather slow, because each time the code changes the image has to be rebuilt, pushed to a registry and deployed to the cluster with kubectl or Helm.

I wanted a faster feedback loop, and while searching I discovered Telepresence. This tool takes a different approach: it doesn’t do continuous deployment to the cluster, but creates a sort of tunnel between the Kubernetes cluster and processes running on my local machine, in such a way that my machine behaves as if it were part of the cluster itself. It can reach services otherwise reachable only from within the Kubernetes cluster, it can itself be reached by other services in the cluster, and it can access environment variables, secrets, config maps and even persistent volumes! It’s kinda magic and I love it!

The workflow is a lot faster because I can deploy to the cluster once, and then start this connection between the cluster and app containers running locally. Telepresence “swaps” the actual deployments in Kubernetes with local containers so that all the requests are forwarded to and processed by these containers. It works really well! Let’s see how I set things up and how I use Telepresence. The instructions below are based on a Ruby on Rails app but can be adapted to other workloads.

Dockerfile

The first step is to “containerize” the app. How you write your Dockerfile is quite important, because an “unoptimised” Dockerfile can lead to very big images that have to be rebuilt from scratch often, slowing things down. Since Docker builds images as “layers” - where each layer is created by an instruction in the Dockerfile - a good Dockerfile can leverage layer caching as well as reduce the final image size considerably. For Rails apps there’s an official Ruby image, which comes in various flavours. The most convenient is the one based on the Alpine Linux distro, since it’s very small compared to the default image, which is based on Debian. There’s an important aspect of Alpine-based images to take into account: Alpine uses musl instead of glibc, so some software may have problems running on Alpine. With most apps this isn’t an issue, though, so by using Alpine you can benefit from smaller images which are quicker to build, push and deploy.

This is my current Dockerfile:

ARG RUBY_VERSION=2.6.1

FROM ruby:$RUBY_VERSION-alpine as development

RUN apk add --no-cache \
  git build-base yarn nodejs mariadb-dev imagemagick \
  chromium-chromedriver chromium tzdata \
  && rm -rf /var/cache/apk/*

ENV RAILS_ENV=development
ENV RACK_ENV=development
ENV RAILS_LOG_TO_STDOUT=true
ENV RAILS_ROOT=/app
ENV LANG=C.UTF-8
ENV GEM_HOME=/bundle
ENV BUNDLE_PATH=$GEM_HOME
ENV BUNDLE_APP_CONFIG=$BUNDLE_PATH
ENV BUNDLE_BIN=$BUNDLE_PATH/bin
ENV PATH=/app/bin:$BUNDLE_BIN:$PATH

WORKDIR /app

COPY Gemfile Gemfile.lock ./

RUN gem install bundler \
  && bundle install -j "$(getconf _NPROCESSORS_ONLN)" \
  && rm -rf $BUNDLE_PATH/cache/*.gem \
  && find $BUNDLE_PATH/gems/ -name "*.c" -delete \
  && find $BUNDLE_PATH/gems/ -name "*.o" -delete

COPY package.json yarn.lock ./

RUN yarn install

COPY . ./

EXPOSE 3000

CMD ["bundle", "exec", "puma", "-Cconfig/puma.rb"]

# Production

FROM ruby:$RUBY_VERSION-alpine as production

RUN apk add --no-cache mariadb-dev imagemagick nodejs yarn tzdata \
  && rm -rf /var/cache/apk/*

WORKDIR /app

ENV RAILS_ENV=production
ENV RACK_ENV=production
ENV RAILS_LOG_TO_STDOUT=true
ENV RAILS_SERVE_STATIC_FILES=true
ENV RAILS_ROOT=/app
ENV LANG=C.UTF-8
ENV GEM_HOME=/bundle
ENV BUNDLE_PATH=$GEM_HOME
ENV BUNDLE_APP_CONFIG=$BUNDLE_PATH
ENV BUNDLE_BIN=$BUNDLE_PATH/bin
ENV PATH=/app/bin:$BUNDLE_BIN:$PATH
ENV SECRET_KEY_BASE=blah

COPY --from=development /bundle /bundle
COPY --from=development /app ./

RUN RAILS_ENV=production bundle exec rake assets:precompile

RUN rm -rf node_modules tmp/* log/* app/assets vendor/assets lib/assets test \
  && yarn cache clean

RUN apk del yarn

EXPOSE 3000

CMD ["bundle", "exec", "puma", "-Cconfig/puma.rb"]

Let’s go through this file in detail. First we specify that we want to base our image on the Alpine version of the Ruby image:

ARG RUBY_VERSION=2.6.1

FROM ruby:$RUBY_VERSION-alpine as development

Also note that we are calling this stage ‘development’. In fact we are going to build a “multi-stage” image with two stages, one for an image that contains everything needed during development, and a final version, smaller than the original one, which will be used in production.

The next RUN instruction installs some packages required for things to work.

RUN apk add --no-cache \
  git build-base yarn nodejs mariadb-dev imagemagick \
  chromium-chromedriver chromium tzdata \
  && rm -rf /var/cache/apk/*

So we install:

git, so that Bundler can fetch and install Ruby gems

build-base, required to compile the native extensions of some Ruby gems

nodejs to compile assets and “packs” with Webpacker

mariadb-dev, so that we can install and use the MySQL Ruby gem; of course this can be swapped for the equivalent package for PostgreSQL or whichever database you use

imagemagick - optional - to manage images with ActiveStorage

chromium-chromedriver and chromium to run system tests with Capybara

tzdata for time zone data, required by Rails or some gem (I can’t remember which).

Note that after installing these packages we clear the apk cache, which will help reduce the final image size.

Next, we set some default environment variables for development and Bundler:

ENV RAILS_ENV=development
ENV RACK_ENV=development
ENV RAILS_LOG_TO_STDOUT=true
ENV RAILS_ROOT=/app
ENV LANG=C.UTF-8
ENV GEM_HOME=/bundle
ENV BUNDLE_PATH=$GEM_HOME
ENV BUNDLE_APP_CONFIG=$BUNDLE_PATH
ENV BUNDLE_BIN=$BUNDLE_PATH/bin
ENV PATH=/app/bin:$BUNDLE_BIN:$PATH

The next thing we do is copy the Gemfile and install the gems with Bundler:

WORKDIR /app

COPY Gemfile Gemfile.lock ./

RUN gem install bundler \
  && bundle install -j "$(getconf _NPROCESSORS_ONLN)" \
  && rm -rf $BUNDLE_PATH/cache/*.gem \
  && find $BUNDLE_PATH/gems/ -name "*.c" -delete \
  && find $BUNDLE_PATH/gems/ -name "*.o" -delete

As you can see, we use the -j parameter with the bundle command so that gems are installed using as many parallel jobs as there are cores available. Once the gems are installed, we delete both the gem cache and the temporary C files generated when compiling native extensions for some gems.

Next, we install the node modules with yarn:

COPY package.json yarn.lock ./

RUN yarn install

We then copy the app’s code to the image:

COPY . ./

There’s a good reason why we first copy just the Gemfile and install the gems, then copy the package.json and install the node modules, and only then copy the complete code: this is to leverage layer caching, as I mentioned earlier. Once the image has been built, the layer in which we install the gems (and the layers before it) has to be rebuilt only if the Gemfile has changed or an earlier instruction in the Dockerfile has changed; the same goes for yarn and the node modules. We copy the full code after these two steps so that when anything in the code changes apart from the Gemfile and the package.json, Docker doesn’t need to rebuild the previous layers and can just quickly update the code in the image.

We then expose port 3000 - Rails’ default web port - and specify that we want to run the Rails server with Puma:

EXPOSE 3000

CMD ["bundle", "exec", "puma", "-Cconfig/puma.rb"]
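For reference, a minimal config/puma.rb along these lines reads the WEB_CONCURRENCY environment variable that we’ll later set in the Kubernetes deployment. This is an illustrative sketch, not necessarily the app’s actual file:

```ruby
# Hypothetical config/puma.rb sketch; adjust to your app.
# WEB_CONCURRENCY matches the env var set in the Kubernetes deployment.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count

preload_app!

# Port 3000 matches the EXPOSE instruction in the Dockerfile.
port ENV.fetch("PORT", 3000)
environment ENV.fetch("RAILS_ENV", "development")
```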

These steps complete an image that can be used for development and running tests. The following instructions in the Dockerfile reduce the image size by removing what’s not needed in production. First, we call this stage production:

FROM ruby:$RUBY_VERSION-alpine as production

We give names to the stages for two reasons: first, so that one stage can reference a previous stage; second, so that when building the image we can instruct Docker to stop at either stage, obtaining either a development image or a production image. We’ll see this later with the most frequent tasks and the Makefile.

Note that when Docker executes the FROM instruction, the build essentially starts again from a fresh Ruby-Alpine image, ignoring what’s been built for the development stage until one or more COPY instructions tell Docker to reference the content of the previous stage. So, because Docker starts with a fresh Ruby image for the production stage, we need to install again the few packages that are still required in production, and then clear the apk cache again:

RUN apk add --no-cache mariadb-dev imagemagick nodejs yarn tzdata \
  && rm -rf /var/cache/apk/*

We now set the same environment variables as before, except we set the Rails environment to production and give a placeholder value for the secret key base, since one is required by the asset precompilation rake task (we’ll set the proper secret later):

ENV RAILS_ENV=production
ENV RACK_ENV=production
ENV RAILS_LOG_TO_STDOUT=true
ENV RAILS_SERVE_STATIC_FILES=true
ENV RAILS_ROOT=/app
ENV LANG=C.UTF-8
ENV GEM_HOME=/bundle
ENV BUNDLE_PATH=$GEM_HOME
ENV BUNDLE_APP_CONFIG=$BUNDLE_PATH
ENV BUNDLE_BIN=$BUNDLE_PATH/bin
ENV PATH=/app/bin:$BUNDLE_BIN:$PATH
ENV SECRET_KEY_BASE=blah

Note that we always tell Rails to send logs to STDOUT (so that we can see the logs with the logs command in both Docker and Kubernetes) and to serve static assets, since we won’t be fronting Rails with something like Nginx to serve the assets directly (in Kubernetes we’ll use the Nginx ingress controller, but without a specific config for serving the app’s assets, which is fine).
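For context, the RAILS_LOG_TO_STDOUT variable is honoured by the standard Rails-generated production environment config, which looks roughly like this:

```ruby
# Excerpt from config/environments/production.rb as generated by Rails
if ENV["RAILS_LOG_TO_STDOUT"].present?
  logger           = ActiveSupport::Logger.new(STDOUT)
  logger.formatter = config.log_formatter
  config.logger    = ActiveSupport::TaggedLogging.new(logger)
end
```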

Next, we copy both the ready bundle and the code from the previous stage, so we have gems already installed:

COPY --from=development /bundle /bundle
COPY --from=development /app ./

Then we precompile the assets so they are ready to be served in production:

RUN RAILS_ENV=production bundle exec rake assets:precompile

Next, we delete directories not needed in production as well as the yarn cache (which alone saves 100-200MB of space in our image!), and uninstall yarn:

RUN rm -rf node_modules tmp/* log/* app/assets vendor/assets lib/assets test \
  && yarn cache clean

RUN apk del yarn

Finally, like for the development stage we expose the port 3000 and specify that by default the container should start the Rails server:

EXPOSE 3000

CMD ["bundle", "exec", "puma", "-Cconfig/puma.rb"]

This Dockerfile is the result of some research and I am happy with it. The development image is around 500 MB, while the production image is under 200 MB, which is good enough for me. Also, the instructions are ordered so that we benefit from good layer caching when rebuilding the image.

To build for development:

docker build --target development -t <registry>/<username>/my-app-dev .

For production:

docker build --target production -t <registry>/<username>/my-app .

Setting up a development Kubernetes cluster with K3s

There are various ways to set up a Kubernetes cluster that we can use for development, but my favourite is using Rancher’s K3s Kubernetes distribution because it’s certified (there are just a few differences from upstream Kubernetes which need to be taken into account) and incredibly lightweight! It’s a complete Kubernetes distro in a single small binary. Thanks to K3s’ low CPU/RAM usage, for development I can use a single-node cluster with just 2 cores and 4 GB of RAM, which is really cheap (I use Hetzner Cloud, where this costs me 6 euros per month).

Installing K3s is super easy using the official script:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--no-deploy=traefik" sh -

Note that in the command above I make sure the Traefik ingress controller is not installed (it is installed by default) because I prefer Nginx as the ingress controller. You may remove that parameter if you are happy with Traefik. Once K3s is up and running, you need to copy the kubeconfig file to your dev machine so that you can manage the cluster with kubectl:

ssh <server name or IP> "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config-<cluster name>
sed -i -e "s/localhost/<server IP>/g" ~/.kube/config-<cluster name>
sed -i -e "s/default/<cluster name>/g" ~/.kube/config-<cluster name>

Once that’s done, set the KUBECONFIG environment variable to ~/.kube/config-<cluster name> so that kubectl and Helm can interact with the cluster. To test, run:

kubectl get nodes

You will need to manage persistent volumes for things like MySQL/Postgres/Redis/etc, so you need to install some software to manage storage or at least a driver for your cloud provider’s block storage, if it is offered. I did a comparison of several storage solutions for Kubernetes so see this post and this one for more info which may help you choose. I currently use a CSI driver for Hetzner Cloud’s volumes as well as Linstor, which I recommend if your provider doesn’t offer block storage.

Next, we need to install Tiller, Helm’s server-side component, which is required to install apps with Helm:

kubectl create ns tiller
kubectl -n tiller create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=tiller:tiller
helm init --tiller-namespace tiller --history-max 200 --service-account tiller --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'
kubectl -n tiller rollout status deploy/tiller-deploy

Alternatively, you may also install a plugin for “Tiller-less” Helm (basically Tiller is running locally).

If you have installed K3s without Traefik, run the following to install the Nginx ingress controller:

helm install --tiller-namespace tiller stable/nginx-ingress \
  --namespace nginx-ingress \
  --set controller.kind=DaemonSet,controller.hostNetwork=true,controller.service.type=ClusterIP

You are likely going to test apps with HTTPS, so you can install cert-manager, which will manage TLS certificates for you using Let’s Encrypt:

kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.9/deploy/manifests/00-crds.yaml
kubectl create namespace cert-manager
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install --tiller-namespace tiller \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.9.1 \
  jetstack/cert-manager

Optionally, if you want to use DNS validation with Let’s Encrypt, you need to create a secret for use with the certificate issuers. For example, for Cloudflare I create this secret:

API_TOKEN=...

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-key
  namespace: cert-manager
type: Opaque
data:
  api-key: $(echo -n "$API_TOKEN" | base64)
EOF

Then I create the issuers:

cat <<EOF | kubectl apply -f -
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: ...
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    server: https://acme-v02.api.letsencrypt.org/directory
    http01: {}
    dns01:
      providers:
        - name: cloudflare
          cloudflare:
            email: ...
            apiKeySecretRef:
              name: cloudflare-api-key
              key: api-key
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: ...
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    http01: {}
    dns01:
      providers:
        - name: cloudflare
          cloudflare:
            email: ...
            apiKeySecretRef:
              name: cloudflare-api-key
              key: api-key
EOF

You can then reference either the staging issuer or the production issuer in your ingresses/certificates.
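For example, with the cert-manager version used here (v0.9, API group certmanager.k8s.io/v1alpha1), a standalone Certificate referencing the staging issuer might look like the sketch below; the names and namespace are illustrative:

```yaml
# Hypothetical Certificate using the staging ClusterIssuer defined above
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: my-app-tls
  namespace: my-app-dev
spec:
  secretName: my-app-tls
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: my-app.com
  dnsNames:
    - my-app.com
    - www.my-app.com
```

Switch issuerRef to letsencrypt-prod once you have verified that issuance works, to avoid hitting Let’s Encrypt’s production rate limits while testing.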

I typically use MySQL as the database, and I like the PressLabs MySQL Operator since it manages replication well and has built-in backups to S3-compatible storage. There are other options, but I think this is the easiest. To install the operator, run:

helm repo add presslabs https://presslabs.github.io/charts
helm install --tiller-namespace tiller presslabs/mysql-operator \
  --name mysql-operator \
  --namespace mysql \
  --set orchestrator.persistence.enabled=true,orchestrator.persistence.storageClass=<your storage class>,orchestrator.persistence.size=1Gi

Next you need to create some secrets including the root password for MySQL and the credentials to access the S3 bucket:

AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=...
S3_ENDPOINT=https://...
ROOT_PASSWORD=...

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: mysql
type: Opaque
data:
  ROOT_PASSWORD: $(echo -n "$ROOT_PASSWORD" | base64)
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-backup-secret
  namespace: mysql
type: Opaque
data:
  AWS_ACCESS_KEY_ID: $(echo -n "$AWS_ACCESS_KEY_ID" | base64)
  AWS_SECRET_ACCESS_KEY: $(echo -n "$AWS_SECRET_ACCESS_KEY" | base64)
  AWS_REGION: $(echo -n "$AWS_REGION" | base64)
  S3_ENDPOINT: $(echo -n "$S3_ENDPOINT" | base64)
EOF

Finally, create the MySQL cluster, which will have only the master since we have a single-node cluster for our development environment:

REPLICAS=1
BACKUP_SCHEDULE="0 0 5 * * *"
BACKUP_RETENTION=30
BACKUP_BUCKET="..."
STORAGE_CLASS=...
STORAGE=10Gi

cat << EOF | kubectl apply -f -
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: mysql-cluster
  namespace: mysql
spec:
  replicas: $REPLICAS
  secretName: mysql-secret
  backupSchedule: "$BACKUP_SCHEDULE"
  backupURL: s3://$BACKUP_BUCKET/presslabs
  backupSecretName: mysql-backup-secret
  backupScheduleJobsHistoryLimit: $BACKUP_RETENTION
  volumeSpec:
    persistentVolumeClaim:
      storageClassName: $STORAGE_CLASS
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: $STORAGE
EOF

The last dependency I have for my current Rails app is Redis, which can be installed easily with:

helm install --tiller-namespace tiller stable/redis \
  --name redis \
  --namespace redis \
  --set cluster.enabled=false,usePassword=false,master.persistence.enabled=true,master.persistence.storageClass=...,master.persistence.size=...

Deployment to Kubernetes with Helm and Helmfile

To deploy our app to the cluster we’ll of course use Helm, together with another tool called Helmfile, which makes it easier to use the same chart with multiple environments. To install Helmfile on a Mac with Homebrew, for example, run:

brew install helmfile

We also need to install the secrets plugin for Helm so that we can manage secrets for each environment:

helm plugin install https://github.com/futuresimple/helm-secrets

We can now create the Helm chart. I like to keep the chart in the same repository as the app’s code, so I create the necessary files in the helm subdirectory with this structure:

helmfile.yaml
helm
├── chart
│   ├── Chart.yaml
│   ├── templates
│   │   ├── deployment-web.yaml
│   │   ├── deployment-worker.yaml
│   │   ├── ingress.yaml
│   │   ├── secret.yaml
│   │   └── service-web.yaml
│   └── values.yaml
├── helmfiles
│   └── 00-my-app.yaml
└── values
    └── my-app
        └── dev
            ├── secrets.yaml
            └── values.yaml

So we have the usual Chart.yaml with the name and description of the chart, a values.yaml with the default values for all the variables referenced in the chart, and some templates for the deployment of both the web app and the background worker (I use Sidekiq), the ingress and service for the web app, and a shared secret. Then we have some config for Helmfile and the different values and secrets for each environment, for now just dev. Let’s see what we have in these files. I am assuming here some familiarity with Helm charts so I’ll just paste the content.

helmfile.yaml

helmfiles:
  - "helm/helmfiles/00-my-app.yaml"

environments:
  dev:
  prod:

helm/chart/Chart.yaml

apiVersion: v1
appVersion: "1.0"
description: A Helm chart for my-app
name: my-app
version: 0.1.0

helm/chart/values.yaml

replicaCount: 1

image:
  repository: <registry>/<user>/my-app
  digest: sha256:x
  pullPolicy: IfNotPresent

ingress:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    certmanager.k8s.io/acme-challenge-type: http01
  hosts:
    - my-app.com
  tls:
    - secretName: my-app-tls
      issuer: letsencrypt-staging
      commonName: my-app.com
      hosts:
        - my-app.com
        - www.my-app.com

mysql:
  host: mysql-cluster-mysql-master.mysql.svc.cluster.local
  port: 3306
  database: my-app_development
  username: my-app
  password: my-app

redis:
  host: redis-master.redis.svc.cluster.local

mail:
  host: some-smtp-server.com
  port: "587"
  tls: "YES"
  hostname: my-app.com
  from: [email protected]
  username: blah
  password: blah

web_concurrency: 2
rails_master_key: blah

helm/chart/templates/deployment-web.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-web
  labels:
    app.kubernetes.io/name: my-app-web
    helm.sh/chart: my-app
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app-web
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app-web
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}@{{ .Values.image.digest }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 3000
              protocol: TCP
          env:
            - name: RAILS_MASTER_KEY
              valueFrom:
                secretKeyRef:
                  key: rails_master_key
                  name: my-app-secrets
            - name: SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  key: rails_secret_key_base
                  name: my-app-secrets
            - name: WEB_CONCURRENCY
              value: {{ .Values.web_concurrency | quote }}
            - name: REDIS_HOST
              value: {{ .Values.redis.host }}
            - name: MYSQL_HOST
              value: {{ .Values.mysql.host }}
            - name: MYSQL_PORT
              value: {{ .Values.mysql.port | quote }}
            - name: MYSQL_DATABASE
              value: {{ .Values.mysql.database }}
            - name: MYSQL_USERNAME
              valueFrom:
                secretKeyRef:
                  key: mysql_username
                  name: my-app-secrets
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql_password
                  name: my-app-secrets
            - name: MAIL_HOST
              valueFrom:
                secretKeyRef:
                  key: mail_host
                  name: my-app-secrets
            - name: MAIL_PORT
              valueFrom:
                secretKeyRef:
                  key: mail_port
                  name: my-app-secrets
            - name: MAIL_HOSTNAME
              valueFrom:
                secretKeyRef:
                  key: mail_hostname
                  name: my-app-secrets
            - name: MAIL_FROM
              valueFrom:
                secretKeyRef:
                  key: mail_from
                  name: my-app-secrets
            - name: MAIL_USERNAME
              valueFrom:
                secretKeyRef:
                  key: mail_username
                  name: my-app-secrets
            - name: MAIL_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mail_password
                  name: my-app-secrets

helm/chart/templates/deployment-worker.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-worker
  labels:
    app.kubernetes.io/name: my-app-worker
    helm.sh/chart: my-app
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: my-app-worker
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: my-app-worker
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}@{{ .Values.image.digest }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["bundle", "exec", "sidekiq", "-C", "config/sidekiq.yml"]
          env:
            - name: RAILS_MASTER_KEY
              valueFrom:
                secretKeyRef:
                  key: rails_master_key
                  name: my-app-secrets
            - name: SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  key: rails_secret_key_base
                  name: my-app-secrets
            - name: REDIS_HOST
              value: {{ .Values.redis.host }}
            - name: MYSQL_HOST
              value: {{ .Values.mysql.host }}
            - name: MYSQL_PORT
              value: {{ .Values.mysql.port | quote }}
            - name: MYSQL_DATABASE
              value: {{ .Values.mysql.database }}
            - name: MYSQL_USERNAME
              valueFrom:
                secretKeyRef:
                  key: mysql_username
                  name: my-app-secrets
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql_password
                  name: my-app-secrets
            - name: MAIL_HOST
              valueFrom:
                secretKeyRef:
                  key: mail_host
                  name: my-app-secrets
            - name: MAIL_PORT
              valueFrom:
                secretKeyRef:
                  key: mail_port
                  name: my-app-secrets
            - name: MAIL_HOSTNAME
              valueFrom:
                secretKeyRef:
                  key: mail_hostname
                  name: my-app-secrets
            - name: MAIL_FROM
              valueFrom:
                secretKeyRef:
                  key: mail_from
                  name: my-app-secrets
            - name: MAIL_USERNAME
              valueFrom:
                secretKeyRef:
                  key: mail_username
                  name: my-app-secrets
            - name: MAIL_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mail_password
                  name: my-app-secrets

helm/chart/templates/ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  labels:
    app.kubernetes.io/name: my-app
    helm.sh/chart: my-app
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ . | quote }}
      http:
        paths:
          - backend:
              serviceName: my-app-web
              servicePort: http
    {{- end }}

helm/chart/templates/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: my-app-secrets
type: Opaque
data:
  mysql_username: {{ .Values.mysql.username | b64enc }}
  mysql_password: {{ .Values.mysql.password | b64enc }}
  mail_host: {{ .Values.mail.host | b64enc }}
  mail_port: {{ .Values.mail.port | b64enc }}
  mail_hostname: {{ .Values.mail.hostname | b64enc }}
  mail_from: {{ .Values.mail.from | b64enc }}
  mail_username: {{ .Values.mail.username | b64enc }}
  mail_password: {{ .Values.mail.password | b64enc }}
  rails_master_key: {{ .Values.rails_master_key | b64enc }}
  rails_secret_key_base: {{ .Values.rails_secret_key_base | b64enc }}

helm/chart/templates/service-web.yaml

apiVersion: v1
kind: Service
metadata:
  name: my-app-web
  labels:
    app.kubernetes.io/name: my-app-web
    helm.sh/chart: my-app
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
    - port: 443
      targetPort: 3000
      protocol: TCP
      name: https
  selector:
    app.kubernetes.io/name: my-app-web
    app.kubernetes.io/instance: {{ .Release.Name }}

helm/helmfiles/00-my-app.yaml

releases:
  - name: my-app-{{ .Environment.Name }}
    namespace: my-app-{{ .Environment.Name }}
    labels:
      release: my-app-{{ .Environment.Name }}
    chart: ../chart
    values:
      - ../values/my-app/{{ .Environment.Name }}/values.yaml
    secrets:
      - ../values/my-app/{{ .Environment.Name }}/secrets.yaml
    kubeContext: my-app-{{ .Environment.Name }}
    wait: true
    tillerNamespace: tiller
    installed: true

environments:
  dev:
  prod:

helm/values/my-app/dev/values.yaml

replicaCount: 1

image:
  repository: <registry>/<username>/<my-app>
  digest: sha256:x
  pullPolicy: IfNotPresent

ingress:
  annotations:
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    certmanager.k8s.io/acme-challenge-type: http01
  hosts:
    - my-app.com
    - www.my-app.com
  tls:
    - secretName: my-app-tls
      issuer: letsencrypt-staging
      commonName: my-app.com
      hosts:
        - my-app.com
        - www.my-app.com

mysql:
  host: mysql-cluster-mysql-master.mysql.svc.cluster.local
  port: 3306
  database: my-app_development

redis:
  host: redis-master.redis.svc.cluster.local

web_concurrency: 2
rails_master_key: blah

The last file is helm/values/my-app/dev/secrets.yaml. This is an encrypted file which must be edited with the Helm Secrets plugin. For the setup, refer to the README on Github, which explains how to configure SOPS/GPG for the encryption. To edit the secrets, you can then run:

helm --tiller-namespace tiller secrets edit helm/values/my-app/dev/secrets.yaml

The contents are plain YAML with the values that should be kept secret; the plugin encrypts them on save.
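As an illustration, before encryption the dev secrets file might look roughly like this, mirroring the values the chart’s templates expect (the actual keys depend on your chart):

```yaml
# Hypothetical plain-text contents of helm/values/my-app/dev/secrets.yaml
# before encryption with the Helm Secrets plugin / SOPS
mysql:
  username: my-app
  password: <password>
mail:
  host: <smtp host>
  port: "587"
  hostname: my-app.com
  from: <from address>
  username: <smtp username>
  password: <smtp password>
rails_master_key: <master key>
rails_secret_key_base: <secret key base>
```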

Once you have the Helm chart ready, you can use Helmfile to deploy to the dev cluster. First, push the production version of the image to the registry:

docker push <registry>/<username>/my-app

Then to deploy:

helmfile -e dev sync

Now you can test that the deployment is working by hitting one of the relevant hostnames. Of course we haven’t created or migrated the database yet, so you’ll see errors.

To create the database, you can first port-forward the MySQL service locally:

kubectl port-forward svc/mysql-cluster-mysql-master 33306:3306 --namespace mysql

This way you can access MySQL locally with:

mysql -uroot -p -h127.0.0.1 -P33306

To create both the development and test databases as well as a user:

CREATE USER 'my-app'@'%' IDENTIFIED BY '<password>';
CREATE DATABASE `my-app_development`;
GRANT ALL PRIVILEGES ON `my-app_development`.* TO 'my-app'@'%';
CREATE DATABASE `my-app_test`;
GRANT ALL PRIVILEGES ON `my-app_test`.* TO 'my-app'@'%';
FLUSH PRIVILEGES;

Of course update the app’s database.yml with the correct settings.
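For illustration, a database.yml that reads the environment variables set in the deployments might look like the sketch below; adjust the adapter and database names to your app:

```yaml
# Hypothetical config/database.yml reading the env vars
# (MYSQL_HOST, MYSQL_PORT, etc.) set in the Kubernetes deployments
default: &default
  adapter: mysql2
  encoding: utf8mb4
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
  host: <%= ENV["MYSQL_HOST"] %>
  port: <%= ENV["MYSQL_PORT"] %>
  username: <%= ENV["MYSQL_USERNAME"] %>
  password: <%= ENV["MYSQL_PASSWORD"] %>

development:
  <<: *default
  database: <%= ENV["MYSQL_DATABASE"] %>

test:
  <<: *default
  database: my-app_test
```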

Try the app again in the browser and it should work as expected.

Telepresence

As explained earlier, I am using Telepresence to swap the Kubernetes deployments with local containers. I need to do this for both the frontend of the app and the Sidekiq worker. First, you need to install Telepresence. On a Mac:

brew cask install osxfuse
brew install datawire/blackbird/telepresence

Refer to the website for instructions on installing on different platforms.

Assuming you have the deployments running in the cluster, now comes the fun part. Create a file with the same environment variables used by the Kubernetes deployments, which will be used by the local containers:

~/.secrets/my-app-dev.env

MYSQL_USERNAME=blah
MYSQL_PASSWORD=blah
MYSQL_HOST=blah
REDIS_HOST=blah
RAILS_ENV=development
MAIL_HOST=blah
MAIL_PORT=blah
MAIL_HOSTNAME=blah
MAIL_FROM=blah
MAIL_USERNAME=blah
MAIL_PASSWORD=blah
RAILS_MASTER_KEY=...

Of course set the correct values. Now create some Docker volumes which will be used by the local containers:

docker volume create my-app_rails_cache
docker volume create my-app_bundle
docker volume create my-app_node_modules
docker volume create my-app_packs

Then, to start Telepresence and swap the Kubernetes deployment for the frontend with a local container, run:

KUBECONFIG=~/.kube/config-<cluster name> telepresence \
  --context my-app-dev \
  --namespace my-app-dev \
  --expose 3000 \
  --method container \
  --swap-deployment my-app-web \
  --docker-run \
  --rm --name my-app-web \
  -p 3000:3000 \
  --env-file ~/.secrets/my-app-dev.env \
  -v ${PWD}:/app:cached \
  -v my-app_rails_cache:/app/tmp/cache \
  -v my-app_bundle:/bundle \
  -v my-app_node_modules:/app/node_modules \
  -v my-app_packs:/app/public/packs \
  <registry>/<username>/my-app-dev

The command above will have Telepresence start a container for us and swap the frontend deployment so that all the requests are proxied to our local container. Similarly, for the Sidekiq worker:

KUBECONFIG=~/.kube/config-<cluster name> telepresence \
  --context my-app-dev \
  --namespace my-app-dev \
  --method container \
  --swap-deployment my-app-worker \
  --docker-run \
  --rm --name my-app-worker \
  --env-file ~/.secrets/my-app-dev.env \
  -v ${PWD}:/app \
  -v my-app_rails_cache:/app/tmp/cache \
  -v my-app_bundle:/bundle \
  -v my-app_node_modules:/app/node_modules \
  -v my-app_packs:/app/public/packs \
  <registry>/<username>/my-app-dev \
  bundle exec sidekiq -C config/sidekiq.yml

If you now run kubectl -n my-app-dev get pods, you’ll see that Telepresence has created new pods for the tunnels/proxies which forward the requests to your local containers, bypassing the Kubernetes deployments. Open the app in the browser and it should still work. Then make a change to the code of some page and check it out in the browser: the change should take effect immediately! IMO this is really cool. It’s easy and offers much faster feedback than continuous deployment with Garden or Skaffold.

Makefile for most frequent tasks

Since there are some verbose commands that I need to run often, I also create a Makefile for the most frequent tasks:

NAME := <registry>/<username>/my-app-dev
TARGET := development
CURRENT_VERSION := $(shell cat current_version)

ifeq ($(ENV),production)
NAME := <registry>/<username>/my-app
TARGET := production
endif

ARGS = `arg="$(filter-out $@,$(MAKECMDGOALS))" && echo $${arg:-${1}}`

%:
	@:

build:
	$(eval NEW_VERSION=$(shell echo $$(($(CURRENT_VERSION)+1))))
	@docker build --target ${TARGET} -t ${NAME} .
	@docker tag ${NAME} ${NAME}:${NEW_VERSION}
	@echo ${NEW_VERSION} > current_version

push:
	@CURRENT_VERSION=`cat current_version`
	@docker push <registry>/<username>/my-app:latest
	@docker push <registry>/<username>/my-app:${CURRENT_VERSION}

deploy:
	@helmfile -e dev sync

deploy-prod:
	@helmfile -e prod sync

bootstrap:
	@docker run --rm --name my-app-web \
		--env-file ~/.secrets/my-app-dev.env \
		-v ${PWD}:/app:cached \
		-v my-app_rails_cache:/app/tmp/cache \
		-v my-app_bundle:/bundle \
		-v my-app_node_modules:/app/node_modules \
		-v my-app_packs:/app/public/packs \
		-it <registry>/<username>/my-app-dev \
		ash -c "bundle install && yarn install"

web:
	@KUBECONFIG=~/.kube/config-<cluster name> telepresence \
		--context my-app-dev \
		--namespace my-app-dev \
		--expose 3000 \
		--method container \
		--swap-deployment my-app-web \
		--docker-run \
		--rm --name my-app-web \
		-p 3000:3000 \
		--env-file ~/.secrets/my-app-dev.env \
		-v ${PWD}:/app:cached \
		-v my-app_rails_cache:/app/tmp/cache \
		-v my-app_bundle:/bundle \
		-v my-app_node_modules:/app/node_modules \
		-v my-app_packs:/app/public/packs \
		<registry>/<username>/my-app-dev

worker:
	@KUBECONFIG=~/.kube/config-<cluster name> telepresence \
		--context my-app-dev \
		--namespace my-app-dev \
		--method container \
		--swap-deployment my-app-worker \
		--docker-run \
		--rm --name my-app-worker \
		--env-file ~/.secrets/my-app-dev.env \
		-v ${PWD}:/app \
		-v my-app_rails_cache:/app/tmp/cache \
		-v my-app_bundle:/bundle \
		-v my-app_node_modules:/app/node_modules \
		-v my-app_packs:/app/public/packs \
		<registry>/<username>/my-app-dev \
		bundle exec sidekiq -C config/sidekiq.yml

shell:
	@docker exec -it my-app-web ash

rails:
	@docker exec -it my-app-web bundle exec rails $(call ARGS)

test::
	@docker exec -it my-app-web bundle exec rails test $(call ARGS) RAILS_ENV=test

test-system:
	@docker exec -it my-app-web bundle exec rails test:system RAILS_ENV=test

create-volumes:
	@docker volume create my-app_rails_cache
	@docker volume create my-app_bundle
	@docker volume create my-app_node_modules
	@docker volume create my-app_packs

delete-volumes:
	@docker volume rm my-app_rails_cache
	@docker volume rm my-app_bundle
	@docker volume rm my-app_node_modules
	@docker volume rm my-app_packs

I won’t go into detail on how the Makefile works since it should be self-explanatory. As you can see, the commands available are:

make build

This builds the image for development by default (my-app-dev) or, if the environment variable ENV is set to production, for production. It also tags the image with an incrementing number stored in the current_version file (which, by the way, you should create with 0 as its content when starting).
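For example, this is how you would initialize the version file, and what the bump performed by the build target boils down to in plain shell:

```shell
# Create the version file with 0 as its content, as the build target expects
echo 0 > current_version

# The build target bumps the number like this before tagging the image
CURRENT_VERSION=$(cat current_version)
NEW_VERSION=$((CURRENT_VERSION + 1))
echo ${NEW_VERSION} > current_version

cat current_version   # → 1
```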

make push

This command pushes the image to the registry (always and only the production image).

make deploy

Deploys to the development cluster.

make deploy-prod

Deploys to the production cluster.

make bootstrap

I use this command when I rebuild the development image and want to update the Docker volumes mounted into the development containers.

make web

Tells Telepresence to start a local container for the frontend and to swap the related Kubernetes deployment with it.

make worker

Same thing as above but for the worker.

make shell

Opens a shell (ash) in a running web container.

make rails ...

Runs a rails command in a running web container.
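The argument passing here relies on a common Make trick: the catch-all `%:` rule swallows the extra goals, and `filter-out` turns them into arguments. A standalone sketch of just that mechanism, using a hypothetical /tmp/args-demo.mk file:

```shell
# Write a tiny Makefile demonstrating the ARGS trick
# (printf interprets \t, so the recipe lines get the tabs Make requires)
printf 'ARGS = `arg="$(filter-out $@,$(MAKECMDGOALS))" && echo $${arg:-${1}}`\n%%:\n\t@:\nrails:\n\t@echo rails $(call ARGS)\n' > /tmp/args-demo.mk

# Goals after the target are filtered out of MAKECMDGOALS and echoed as arguments
make -f /tmp/args-demo.mk rails db:migrate   # prints: rails db:migrate
```

So `make rails db:migrate` ends up running `rails db:migrate` inside the web container.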

make test / make test-system

Runs tests using a running web container.

make create-volumes / make delete-volumes

Creates/deletes the volumes used by the local containers.

Conclusion

I hope I remembered to include everything relevant for this tutorial. Like I said, I am very happy with this setup and glad to share it. It’s a little bit of work upfront, but it’s something that needs to be done only once anyway. Then, I just start the web and worker containers locally with Telepresence and get on with development as usual! Thanks to this setup and workflow, I don’t miss anything from the old way of doing Rails development, even though there’s of course some additional complexity in comparison. Please feel free to let me know in the comments if you run into issues while replicating this setup and I will be happy to help if I can. Any feedback is also most welcome.