If you get an authentication or permissions error, try running gcloud auth login and allow the permissions requested.

Try running kubectl cluster-info to check on the status of your cluster. We don’t need these endpoints, but if this command did not execute successfully, you’ll want to fix the issue before continuing.
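If the cluster is healthy, the output looks roughly like the following (the addresses are specific to your cluster, and the exact set of services listed varies by Kubernetes version):

% kubectl cluster-info
Kubernetes master is running at https://<your-cluster-endpoint>
KubeDNS is running at https://<your-cluster-endpoint>/api/v1/proxy/namespaces/kube-system/services/kube-dns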

Our production mysql container will place high demands on the node it is scheduled on, and we don’t want it to share resources with web containers. If a web container were very busy serving web requests, it could prevent our database container from executing queries and performing well. Since the database is often a bottleneck and we will only run one mysql container, this next step describes how to set up a pool of nodes in GKE that only our database container is allowed to use.

gcloud container node-pools create db-pool \
  --machine-type=n1-highmem-2 \
  --num-nodes=1 \
  --cluster=rails

The above command will create a pool of nodes named db-pool in GKE. That pool will contain one node (VM) and the pool will exist within our rails cluster, which we’re using for the rest of our deployment.

When the pool creation is done, you should see something like



Creating node pool db-pool...done.
Created [ https://container.googleapis.com/ ....].
NAME     MACHINE_TYPE  DISK_SIZE_GB  NODE_VERSION
db-pool  n1-highmem-2  100           1.4.5
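You can confirm the pool and its node exist with either of the following commands; the exact columns vary a bit by gcloud and kubectl version:

gcloud container node-pools list --cluster=rails
kubectl get nodes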

Deploying MySQL

Now that our cluster is running, we can deploy some containers.

CAVEAT: running mysql as a container, even with an orchestration tool, may not be a production-quality solution for you or your business. Many people advise against deploying your database as a container, but for the purposes of this tutorial (and in case it meets your needs), I’ll show you how to do so.

In preparation for some files we’ll be creating, make a kube directory in your project root. We’ll store our kubernetes config objects there for reference.

We’ll start by deploying the mysql container. When mysql initializes for the first time, it reads the value of an ENV variable called MYSQL_ROOT_PASSWORD. We want this password set for our container but kept secret and not baked into the image, so we’ll create a kubernetes secret object holding the username and password for the mysql root account, and we’ll grant specific containers access to this secret.

Creating Secrets

Secrets can be exposed to a container via a data volume or environment variables. For our example, we’ll want these secrets as environment variables so we can use them in our application config.

First, encode your username and password as base64 and save this for later:

% echo -n 'root' | base64

cm9vdA==

% echo -n 'my secure password' | base64

bXkgc2VjdXJlIHBhc3N3b3Jk

% docker-compose run --rm app rake secret | base64

YzcwZTYyODdhYmIzNjE1NzI4MjkwYTA1ZjNmZDlkM2NlYWU2NzIzNGY0ZDFlYTc2OTQyODFkOTM0MTczNWEwYzA0NWEzYjAxZGVkMDEyYjBhODQxNzhmNDAxMjY2OTA2ZDJjMDM2ZTY3MWQ1MzZkNDZhZDhiZGVlOGEwYjQ5ODY=
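If you want to double-check a value before pasting it into the secret file, decode it again (the flag is --decode with GNU coreutils; macOS’s base64 may want -D):

% echo 'cm9vdA==' | base64 --decode
root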

We’ll then create a file named app-secrets.yml in the ./kube folder, and the contents should look like the following:

# ./kube/app-secrets.yml
apiVersion: v1
data:
  mysql_user: cm9vdA==
  mysql_password: bXkgc2VjdXJlIHBhc3N3b3Jk
  secret_key_base: YzcwZTYyODdhYmIzNjE1NzI4......=
kind: Secret
type: Opaque
metadata:
  name: app-secrets

ProTip™: Use your own unique secret_key_base.

Use kubectl create -f kube/app-secrets.yml to create the secret object in the kubernetes master. Once that succeeds, you can delete the file. (If you commit it, you will be exposing your credentials to whoever has access to your source control; base64 is not encryption.) You can edit the secret later with kubectl edit secret/app-secrets: your EDITOR will open and your changes are sent straight to the kubernetes master in the cluster.

You should see secret “app-secrets” created. The kubernetes master now holds this object and can expose the secret as ENV variables to any container we specify.
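To confirm the secret landed in the cluster without printing the values, you can list and describe it; describe shows only the key names and byte counts:

kubectl get secrets
kubectl describe secret app-secrets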

Creating a persistent disk

In number 4 of our production requirements we stated that we wanted our database data to live independently so that it doesn’t disappear when a container is destroyed or moved. To accomplish this, we’ll create a persistent disk in Google Compute to hold our mysql data. The persistent disk outlives any clusters, nodes, and containers. Persistent disks can also be resized later. Use the following command to create a persistent disk with the name db-data and 300GB of storage in GCE (you can resize this later):

gcloud compute disks create --size 300GB --type pd-ssd db-data

We’ll use this disk later as a volume for our mysql container.
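If you outgrow the disk, recent gcloud releases can grow it in place (you’ll still need to grow the filesystem on the node afterwards, e.g. with resize2fs), along these lines:

gcloud compute disks resize db-data --size 500GB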

Creating the MySQL deployment

Now we’re finally ready to create a Deployment for our mysql container. A deployment combines a couple of things in Kubernetes. First, it acts as a replication controller, which keeps a set number of pods running at all times. A pod is an atomic group of containers (usually one) that are scheduled together; our web pod (web server containers) will be separate from our mysql pod (mysql container). The replication controller will ensure n of our pods are running at all times on whatever node they fit on. A deployment also includes the setup for rollouts. A rollout is when a new container image is specified for existing pods and the containers running the old image are replaced with new ones in a rolling fashion; Kubernetes handles a zero-downtime rollout of the new container image to replace all existing, older-image containers. For instance, you could deploy a minor version update to mysql via a rollout, ensuring that your database keeps running while a newer version replaces the existing mysql container.
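As a concrete (and purely hypothetical) example, once the deployment below exists you could trigger and watch such a rollout with kubectl; the 5.6.35 tag here is just an illustration, so don’t run this now:

kubectl set image deployment/mysql mysql=mysql:5.6.35
kubectl rollout status deployment/mysql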

The Kubernetes spec file for our mysql deployment looks like this:

# ./kube/mysql-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: db-pool
      containers:
        - image: mysql:5.6
          name: mysql
          resources:
            requests:
              cpu: 800m
            limits:
              cpu: 800m
          env:
            - name: MYSQL_DATABASE
              value: app_production
            - name: MYSQL_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: mysql_user
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: mysql_password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            # This name must match the volumes.name below.
            - name: mysql-db-data
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysql-db-data
          gcePersistentDisk:
            # This disk must already exist.
            pdName: db-data
            fsType: ext4

replicas: 1 means we want one pod running at all times. containers: is an array of containers that should run in this pod. There is only one, and its image is the canonical mysql:5.6 from the docker registry. It requests 80% of one CPU core (15% goes to kube-system pods). For ENV variables, we specify three values: MYSQL_DATABASE (whose value is set explicitly), plus MYSQL_ROOT_USER and MYSQL_ROOT_PASSWORD (whose values come from the kubernetes secret we created earlier, named app-secrets). The container exposes port 3306, on which mysql listens, so we must declare this to kubernetes. Finally, this pod requires a volume named mysql-db-data mounted at /var/lib/mysql, defined at the bottom, where we indicate that a gcePersistentDisk named db-data should be used and treated as an ext4 filesystem.

The volumes definition is very important here. The db-data disk must be attached to the VM node that our mysql container will run on—and because kubernetes will ultimately decide which node in the pool the database will run on, we don’t know which node to attach the disk to. This is okay, because kubernetes will automatically attach that persistent disk to the node it has decided to schedule the mysql pod on before it runs the mysql pod itself. If our mysql pod gets scheduled or recreated on a different node, the disk will be attached before it arrives. Isn’t that nice?

Save this YAML to a file in your ./kube folder named mysql-deployment.yml. You can now create the mysql deployment with kubectl create -f kube/mysql-deployment.yml.

Run kubectl get pods and you should see your pod running after a minute or so:

NAME READY STATUS RESTARTS AGE

mysql-2390497038-gysw7 1/1 Running 0 59s
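To watch mysql initialize (it creates the app_production database on first boot), you can tail the pod’s logs, substituting your own pod name from kubectl get pods:

kubectl logs -f mysql-2390497038-gysw7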

Creating the MySQL service

We’re almost done with mysql. Now we need to expose mysql to the rest of the cluster so other containers can easily communicate with the mysql pod. This is accomplished with a kubernetes Service. The spec is fairly simple:

# ./kube/mysql-service.yml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  ports:
    - port: 3306
  selector:
    name: mysql

This defines a service in our cluster named mysql. Kubernetes makes it available to every pod in the cluster by resolving DNS lookups for mysql to the service, which forwards traffic on port 3306 to any pod matching the selector name=mysql.

Use kubectl create -f kube/mysql-service.yml to create this service. It should now be listed when you run kubectl get services:

NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes   10.3.240.1     <none>        443/TCP    1h
mysql        10.3.xxx.xxx   <none>        3306/TCP   5s
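If you’re curious to see the DNS piece in action, a throwaway busybox pod can resolve the service name from inside the cluster (this is purely a sanity check; delete the pod when you’re done):

kubectl run -i --tty dns-test --image=busybox --restart=Never -- nslookup mysql
kubectl delete pod dns-test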

We’re done with mysql!

ProTip™: if you’d like to connect to your mysql pod from your local machine, try kubectl port-forward [pod name] 3306:3306 and connect to localhost:3306 with a mysql client. The pod name will be available via kubectl get pods . Wow!
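With the port-forward running in one terminal, connecting from another terminal looks something like this; use 127.0.0.1 rather than localhost so the mysql CLI connects over TCP instead of a local socket, and supply the password you stored in the secret:

kubectl port-forward mysql-2390497038-gysw7 3306:3306
mysql -h 127.0.0.1 -P 3306 -u root -p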

Now that the mysql pod is running and we know we can reach it via mysql:3306, we can update our rails application’s database.yml to look like this:

# config/database.yml
default: &default
  adapter: mysql2
  encoding: utf8
  pool: 5
  username: root
  password: root
  host: mysql

production:
  <<: *default
  database: app_production
  username: <%= ENV['MYSQL_USER'] %>
  password: <%= ENV['MYSQL_PASSWORD'] %>

MYSQL_USER and MYSQL_PASSWORD are ENV vars we’ll make available at the bottom of our web deployment spec in the next section. The hostname mysql will resolve to the mysql pod. app_production is the database that was automatically created when mysql started for the first time, because we set the MYSQL_DATABASE ENV variable on that container in our mysql deployment spec. If you skipped that step, run rake db:setup in one of your web pods after setting them up in the next section.
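One way to do that, once a web pod is running, is to exec into it (this assumes the bundler setup from the Dockerfile below, so adjust the command to match your image):

kubectl exec -it [web pod name] -- bundle exec rake db:setup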

Deploying the application

To deploy pods of our application container for serving web requests, we’ll need a container image to run, a deployment, a load balancer that exposes our web pods to the public internet, and, for good measure, an autoscaler that increases or decreases the number of web pods to match incoming web traffic. The web container will run phusion passenger / nginx, which we set up in Part I of this guide; it runs our app and handles web requests. Our web pod will consist of just the one container: our phusion passenger application image.

Before we can deploy our application container, we must build it and upload it to a docker registry so that our GKE nodes can easily download it and run it.

Let’s build our docker container from the application root. The example application I’m using can be found here. As outlined in Part I, we’re running the phusion-passenger-ruby23:0.9.19 image and our Dockerfile has been written such that when it’s done building the container, the container will be ready to run in production. This includes bundle install and rake assets:precompile as those are both necessary before running the application.

# ./Dockerfile
FROM phusion/passenger-ruby23

# set some rails env vars
ENV RAILS_ENV production
ENV BUNDLE_PATH /bundle

# set the app directory var
ENV APP_HOME /home/app
WORKDIR $APP_HOME

# Enable nginx/passenger
RUN rm -f /etc/service/nginx/down

# Disable SSH
# Some discussion on this: https://news.ycombinator.com/item?id=7950326
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh

# Install apt dependencies
RUN apt-get update -qq
RUN apt-get install -y --no-install-recommends \
    build-essential \
    curl \
    libssl-dev \
    git \
    unzip \
    zlib1g-dev \
    libxslt-dev \
    mysql-client \
    sqlite3

# install bundler
RUN gem install bundler

# Separate task from `ADD . .` as it will be
# skipped if Gemfile.lock hasn't changed
COPY Gemfile* ./

# Install gems to /bundle
RUN bundle install

# place the nginx / passenger config
RUN rm /etc/nginx/sites-enabled/default
ADD nginx/env.conf /etc/nginx/main.d/env.conf
ADD nginx/app.conf /etc/nginx/sites-enabled/app.conf

ADD . .

# compile assets!
RUN bundle exec rake assets:precompile

EXPOSE 3000

CMD ["/sbin/my_init"]

This Dockerfile references a couple of configs in ./nginx in our project root directory. These are the nginx configs for the web server. You can read more about what they contain in the Phusion Passenger Docker Image Docs. Example config files that are required for this tutorial can be found here. Place them in a folder named nginx in the project root.

If you’re using a different Dockerfile, your application should listen on port 3000 and respond to a GET /_health HTTP request with a 200 status code when everything is okay (you can change 3000 everywhere you see it to your specific port if you need to). We will use this endpoint for the kubernetes health checks.

Run docker build -t app . where app is the name you’d like to use for your container. I’ll continue to use app for the remainder of this tutorial. It may be helpful to be more specific with the name.

Pushing the container to the registry

Since we’ll upload this container to the private google cloud container registry that comes with our google cloud account, we need to tag the image with our project name and container name. The format is as follows: us.gcr.io/$PROJECT_ID/$CONTAINER_NAME:$TAG . My project ID is rails-kube-demo and my container name is app , so I will run docker tag app us.gcr.io/rails-kube-demo/app:v1 to tag an existing container named app that I’ve built. You could also use the git commit SHA as the tag, but v1 will suffice for now.
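If you’d rather use the git commit SHA as the tag, something like this works from the project root:

docker tag app us.gcr.io/rails-kube-demo/app:$(git rev-parse --short HEAD)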

When your container is done building, you can push it to the google cloud registry with gcloud docker push us.gcr.io/rails-kube-demo/app:v1 where us.gcr.io/rails-kube-demo/app:v1 is what we just tagged our container with. You should see something like this:

The push refers to a repository [us.gcr.io/rails-kube-demo/app]

1a3012e3f756: Pushed

3372d3e3f26a: Pushed

10b93f2e7983: Pushed

d2015d8207ae: Pushed

044d2c25c0a4: Pushed

6583778e1b31: Pushed

5d90245e8929: Pushed

b567022f1893: Pushed

fadbdb7f7da6: Pushing [==========> ] 47.72 MB/79.08 MB

dd5bad579675: Pushing [=====> ] 19.77 MB/39.29 MB

02471478283d: Pushed

55bbe78d1c49: Pushed

28ba3922517b: Pushing [====> ] 64.81 MB/429.4 MB

872e268735cb: Pushed

5f70bf18a086: Pushed

0184e31d4eba: Pushing [====> ] 28.3 MB/92.83 MB

19a8383d6948: Pushed

0738910e0455: Pushed

21df36b5c775: Pushed

315fe8388056: Pushed

7f4734de8e3d: Pushing [====> ] 10.24 MB/124.1 MB

When it’s done, your container is now available for your kubernetes nodes to download and run.

Creating the web deployment

The web deployment is the pod specification for our rails app web servers. This works much like the mysql deployment, but we’ll want more than one web server running. This pod will also need the app-secrets secret, which you can see near the bottom of the spec.

Our web deployment spec will look like this:

# ./kube/web-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
  labels:
    name: web
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: web
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
      containers:
        - name: web
          image: us.gcr.io/rails-kube-demo/app:v1
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet:
              path: /_health
              port: 3000
            initialDelaySeconds: 30
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /_health
              port: 3000
            initialDelaySeconds: 30
            timeoutSeconds: 1
          env:
            - name: SECRET_KEY_BASE
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: secret_key_base
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: mysql_user
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: mysql_password

Our label for this container is web, and we want 2 replicas of our application container. The container exposes port 3000 (where nginx is listening inside the container). We want kubernetes to request /_health to check whether this container is alive (liveness) and to hit the same endpoint to decide whether the container is ready to be added to the load balancer (readiness). The app-secrets secret is exposed to this container as ENV variables so it can connect to mysql and use the SECRET_KEY_BASE. For kubernetes to know that your application is alive and your web server is listening, the application needs to respond to a GET /_health HTTP request with a 200 status code when everything is okay, so add this endpoint to your app. We’ve used the nodeSelector option to indicate that this pod has an affinity for the default-pool node pool (so it doesn’t get scheduled on our db-pool nodes, which are reserved for our mysql pod).

Save this file as ./kube/web-deployment.yml and create the objects with kubectl create -f kube/web-deployment.yml. Check kubectl get pods and use kubectl describe pod [pod name] to check the status of the pod deployment. If there’s an issue, describe pod will show it at the bottom, and you can use kubectl apply -f kube/web-deployment.yml to apply changes that you’ve made to your spec.
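Once the pods report Running and 1/1 ready, you can hit the health endpoint from inside a pod to confirm the probes have something to succeed against (this assumes curl is present in the image, which the Dockerfile above installs):

% kubectl exec [web pod name] -- curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/_health
200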

Troubleshooting