[Illustration of Prisma's topology]

Disclaimer: There will be a general assumption that you are aware of what Prisma is, what it does, and its benefits. To learn more about Prisma, check out their excellent website: www.prisma.io

Prisma provides a very generous free development environment called Prisma Cloud, but you may not want to, or be able to, use this environment for your production application. So we’re going to walk through how to set up a highly available deployment of Prisma to be used within your applications.

Containerisation

Encapsulating your application within a container allows you to separate concerns and scale independently, and to do so in a repeatable, self-contained and reliable way.

Container orchestration can be achieved in a number of ways, but we’ll be focusing on Kubernetes, as it’s supported by AWS, Azure and Google Cloud and provides an idiomatic approach to managing and scaling your deployments.

Prisma utilizes Docker, which makes it incredibly versatile for deployment locally or within a remote Docker environment.
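As a quick illustration (separate from the Kubernetes setup below), a minimal local docker-compose sketch might look like the following; the MySQL service name and credentials here are illustrative only, and the image tag matches the one used later in this article:

version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.15
    restart: always
    ports:
      - "4466:4466"
    environment:
      # same configuration format as the PRISMA_CONFIG used later in this article
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: mysql
            host: mysql
            port: 3306
            user: root
            password: prisma
            migrations: true
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: prisma

Running docker-compose up would then give you a local Prisma server on port 4466.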

Data Storage

Prisma wraps our database with static typing and a GraphQL server which automatically generates bindings for our modeled data, so ensuring we have a dependable and scalable database is key to a production instance of Prisma. Prisma currently supports Postgres and MySQL, with a MongoDB connector in beta.

We’ll be using Kubernetes and Google Cloud for our infrastructure, so we’ll take advantage of Google’s Cloud SQL product.
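If you don’t already have a Cloud SQL instance, one can be created with the gcloud CLI; a minimal sketch, where the instance name, tier and region are illustrative and should match the connection string used in the deployment manifest later on:

$ gcloud sql instances create production-prisma \
    --database-version=MYSQL_5_7 \
    --tier=db-n1-standard-1 \
    --region=europe-west1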

If you wish to deploy your own database, you should, at a minimum, look to use a clustered MySQL deployment with a master/slave relationship and replication.

Defining our infrastructure

We’ll be using Kubernetes, which utilizes manifests to describe its resources. One of these is a Deployment, which will represent our application and define environment variables that it can access; the application itself runs as one or more pods within Kubernetes.

We will also describe how our pod should be treated in the event of a failure (self-healing) and how it should scale.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prisma
  namespace: prisma
  labels:
    stage: production
    name: prisma
    app: prisma
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        stage: production
        name: prisma
        app: prisma
    spec:
      containers:
        - name: prisma
          image: 'prismagraphql/prisma:1.15'
          ports:
            - name: prisma-4466
              containerPort: 4466
          env:
            - name: PRISMA_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: prisma-configmap
                  key: PRISMA_CONFIG
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: prisma-cloudsql-db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: prisma-cloudsql-db-credentials
                  key: password
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=mysql-212618:europe-west1:production-prisma=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: prisma-cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: prisma-cloudsql-instance-credentials
          secret:
            secretName: prisma-cloudsql-instance-credentials

Some details within our deployment manifest

replicas: 2
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0

Scaling is separated into two distinct areas: deployment strategy and resource availability.

Our deployment strategy declares how we wish for new images to be released to our containers. A Recreate strategy terminates the old pods before releasing the new version; typically this results in downtime.

Opting for a rolling update allows us to deploy changes whilst maintaining uptime; we can also pause a rollout, then resume or cancel it.

More information on the different strategies can be found here: https://container-solutions.com/kubernetes-deployment-strategies

Availability is governed by how many pods can serve the application during a rollout. We’ve declared 2 replicas, and maxSurge: 1 means only one pod can be created in addition to that value, giving us a maximum of 3 pods active at any one time during a rollout; maxUnavailable: 0 ensures an old pod is only removed once its replacement is ready.
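The replica count itself is static in this manifest; if you also want it to track load, a HorizontalPodAutoscaler can be layered on top of the deployment. A minimal sketch, where the CPU target and maximum replica count are illustrative:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: prisma
  namespace: prisma
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: prisma
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80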

env:
  - name: PRISMA_CONFIG
    valueFrom:
      configMapKeyRef:
        name: prisma-configmap
        key: PRISMA_CONFIG

We're referencing a ConfigMap named prisma-configmap, whose PRISMA_CONFIG key holds the configuration Prisma expects in order to connect to our database and define details about our host.

You can read more about this here: https://www.prisma.io/docs/run-prisma-server/deployment-environments/docker-rty1/#prisma_config-reference

We can define our config map as a resource within Kubernetes:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prisma-configmap
  namespace: prisma
  labels:
    stage: production
    name: prisma
    app: prisma
data:
  PRISMA_CONFIG: |
    port: 4466
    databases:
      default:
        connector: mysql
        host: 127.0.0.1
        port: 3306
        user: prisma
        password: password123 # make this a lot more complex and secure
        migrations: true

Volume Mounting & Google Cloud Proxy

We need to allow our application to communicate securely with our Google Cloud SQL database; one way we can do that is by adding a proxy sidecar to our application deployment.

This will create a secure tunnel between the Cloud SQL instance within Google Cloud and our application’s pod within Kubernetes.

apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  template:
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=mysql-212618:europe-west1:production-prisma=tcp:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          securityContext:
            runAsUser: 2
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: prisma-cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
      volumes:
        - name: prisma-cloudsql-instance-credentials
          secret:
            secretName: prisma-cloudsql-instance-credentials

We’re also mounting a secret which holds our Google Cloud IAM credentials, so we can reference it through the -credential_file flag within the command block.

./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306 \
                  -credential_file=<PATH_TO_KEY_FILE> &

https://cloud.google.com/sql/docs/mysql/connect-admin-proxy#start-proxy

Managing Secrets

Maintaining excellent secret hygiene is crucial to a healthy production environment. Kubernetes provides a mechanism to store secrets securely; however, if you require stricter control over how secrets are stored and the ability to audit access to them, exploring HashiCorp Vault may be a good idea, though that is out of the scope of this exercise.

We’ve referenced a secret named prisma-cloudsql-instance-credentials; again, we can create secrets as a resource within Kubernetes.

kubectl create secret generic prisma-cloudsql-instance-credentials \
    --namespace prisma \
    --from-file=credentials.json=[PROXY_KEY_FILE_PATH]

https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine#5_create_your_secrets

If you don’t have a credentials.json file which holds an IAM account for access to your database you can follow these instructions here: https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine#2_create_a_service_account
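The deployment above also reads DB_USER and DB_PASSWORD from a secret named prisma-cloudsql-db-credentials; it can be created in the same way. The username and password shown here are placeholders and should match your Cloud SQL user:

kubectl create secret generic prisma-cloudsql-db-credentials \
    --namespace prisma \
    --from-literal=username=prisma \
    --from-literal=password=<your-database-password>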

Accessing Prisma

You should now have a single cluster with two pods running Prisma, both connecting to your Google Cloud SQL database.

However, you can’t access Prisma until you expose the application deployment as a service. More on services here: https://kubernetes.io/docs/concepts/services-networking/service

Services are a Kubernetes resource and can be declared in a similar fashion to the rest of our application.

apiVersion: v1
kind: Service
metadata:
  name: prisma-backend
  namespace: prisma
  labels:
    stage: production
    name: prisma
    app: prisma
spec:
  type: NodePort
  selector:
    stage: production
    name: prisma
    app: prisma
  ports:
    - port: 4466
      targetPort: 4466

We’re choosing to reference and group our pods together using selectors. Selectors allow us to abstract away any discovery mechanism and ensure we can add additional pods to our pool without changing the overall definition of our application infrastructure.

spec:
  type: NodePort
  selector:
    stage: production
    name: prisma
    app: prisma

You should now be able to see your new Prisma service by running:

$ kubectl describe services --namespace prisma

We’ve declared within all our manifests that any resources we create should reside within the prisma namespace, so it’s important to remember to pass this namespace to any commands you run.
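The prisma namespace itself also needs to exist before any of these manifests can be applied; a minimal sketch, where the manifest file names are illustrative:

$ kubectl create namespace prisma
$ kubectl apply -f configmap.yaml -f deployment.yaml -f service.yaml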

You can also proxy requests to your new service.

$ kubectl port-forward --namespace prisma <the-pod-name> 4467:4466

https://www.prisma.io/docs/1.14/tutorials/deploy-prisma-servers/kubernetes-aiqu8ahgha#configuration-of-the-prisma-cli

You should now be able to access your Prisma instance locally at http://localhost:4467

This is also a great way to test individual pods, since you specify which pod to proxy to.
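To find the pod name to forward to, you can list the pods using one of the labels declared in the deployment:

$ kubectl get pods --namespace prisma -l app=prisma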

At this point, you will also be able to use Kubernetes’ internal DNS and service discovery to send requests to your Prisma service, which is recommended if you do not wish to expose it publicly. If you do expose it, you should make sure to provide a secure secret for the managementApiSecret value within your PRISMA_CONFIG and also add TLS termination.
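Inside the cluster, the service is reachable at its DNS name, following the standard <service>.<namespace>.svc.cluster.local convention:

http://prisma-backend.prisma.svc.cluster.local:4466

The managementApiSecret sits at the top level of PRISMA_CONFIG alongside port; a sketch, where the secret value is a placeholder:

port: 4466
managementApiSecret: <a-long-random-secret>
databases:
  ...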

Exposing publicly

If you HAVE to expose your Prisma instance externally, you will need to declare an Ingress resource which refers to our service and forwards requests via its NodePort to a publicly accessible address.

As we’re using Google Cloud, we can register a static IP address, which ensures any changes to our Ingress controller’s networking interface won’t result in us losing access to the Prisma service.

$ gcloud compute addresses create <name-of-ip> --global

We can now use our newly registered and assigned IP address to create an A record within DNS.
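The address that was assigned can be retrieved with:

$ gcloud compute addresses describe <name-of-ip> --global --format='value(address)'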

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: basic-ingress
  namespace: prisma
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <name-of-ip>
spec:
  backend:
    serviceName: prisma-backend
    servicePort: 4466

Summary

We’ve deployed a cluster of Prisma servers, backed by Google Cloud SQL, and then exposed that as an internal service within Kubernetes.

You should also now know how to access specific pods within your cluster using kubectl port-forward, and how to connect them to your database using the Cloud SQL Proxy.

Finally, we wanted to access our cluster externally, so we set up a global static IP address and assigned it to an Ingress resource backed by our previously defined service.

Next time we’ll examine how to make our servers more secure using HashiCorp Vault and by adding TLS/SSL with Let's Encrypt.