
Introduction

In this article I'll explain my setup for Dockerizing a Gatsby project, and then a full-stack web app built with NextJS, Semantic-UI, Apollo/GraphQL, ExpressJS, MongooseJS/MongoDB, and NGINX.

We will stand up the default Gatsby starter project, and then a quick and dirty no-frills issue tracker app with NextJS.

There is more than one way to peel this potato; this article covers the configurations and methodologies that have worked well for me. If you have some tricks of your own I'd love to hear them in the comments down below!

I worked in Windows Subsystem for Linux (WSL) exclusively for a long time, but I eventually embraced the Windows environment for development due to better integration with VSCodium and some other tools, and I haven't looked back. WSL is still great, and I use it when I need a bash shell for command-line tools like SSH, though even PowerShell has SSH now. Meanwhile, WSL2 is coming down the pike and looks to be pretty exciting, so who knows what the future holds.

In order to follow along with this tutorial, you will need a Windows 10 machine with the following software installed.

Required Software

Docker Desktop

NodeJS (NPM comes bundled in the Windows installer)

Recommended Software

Go (I don't get too heavy into Go, it's just used for some Docker healthchecks)

VSCodium (or Visual Studio Code) with the following extensions:

Docker (Microsoft)
Docker Explorer (Jun Han)
Go (Microsoft)
NGINX Configuration (William Voyek)
nginx-formatter (Simon Schneider)
nginx.conf hint (Liu Yue)
npm (egamma)
npm Intellisense (Christian Kohler)
Prettier - Code formatter (Esben Petersen)
vscode-icons (VSCode Icons Team)



Notes

Keep any environment variables that are required at the time the image is built in Dockerfile and Dockerfile.dev , and any others (especially secrets or sensitive information) in docker-compose.yml and docker-compose.dev.yml . If there is sensitive information in any Docker configuration file, don't forget to add it to .gitignore and .dockerignore to keep it private!

Sometimes npm run start:dev will error out on the first run of a new project. I believe this is an issue with filesystem permissions taking too long to finish allowing the new directory while the image is being built; running npm run build:dev to rebuild the image will clear up the issue.

At the time of writing, Docker Desktop for Windows version 2.2.0.0 has a nested mounting bug that breaks these projects. There is an open issue on GitHub.

In the meantime, you can download Docker Desktop for Windows version 2.1.0.5 from here.
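As a concrete sketch of that first note: supposing your docker-compose.yml ended up holding a secret, you could append it to both ignore files in one shot. The filename here is just an example; adjust it to whichever file actually holds sensitive values.

```shell
# Append the compose file to both ignore lists at once; tee -a
# appends rather than overwriting, and creates missing files.
printf 'docker-compose.yml\n' | tee -a .gitignore .dockerignore
```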

Gatsby

I'll start with Gatsby since it has fewer moving parts, making for an easier setup. For reference, here is a GitHub repository of the finished project:

Initialize

Hold the shift key and right-click on your desktop, choose "Open PowerShell window here", then install gatsby-cli globally and create a new project:

npm i -g gatsby-cli
gatsby new cloud-native-gatsby
exit

Right-click on the “cloud-native-gatsby” folder on your desktop, and choose “Open with VSCodium” (or VSCode), and we’re ready to go.

Create a new folder called docker in the root of the project; we'll be working mostly in here. Also add two more folders inside the docker folder called healthcheck and nginx .

Prettier

I use Prettier to keep my code uniform and clean (or at least the formatting of it anyway). Since we’re Dockerizing, we’ll need to install Prettier globally because the package won’t be available on the machine you’re running VSCodium on.

npm i -g prettier

Press ctrl + , to open VSCodium's settings, type prettier path into the search box, and change the value to the full path of the global installation. Be sure to replace <USERNAME> with your username on your workstation:

C:\Users\<USERNAME>\AppData\Roaming\npm\node_modules\prettier

Now we can add a configuration file in the root of the project, here are the settings I like to use:

.prettierrc

{
  "semi": false,
  "trailingComma": "all",
  "singleQuote": true,
  "printWidth": 70,
  "arrowParens": "always",
  "bracketSpacing": true,
  "endOfLine": "lf",
  "htmlWhitespaceSensitivity": "strict",
  "insertPragma": false,
  "jsxBracketSameLine": false,
  "jsxSingleQuote": true,
  "proseWrap": "preserve",
  "quoteProps": "as-needed",
  "requirePragma": false,
  "tabWidth": 2,
  "useTabs": false,
  "vueIndentScriptAndStyle": false
}

Docker Ignore

Copy the default .gitignore file and name it .dockerignore , we want to ignore all of the same files.

Healthcheck

Here's a super simple healthcheck that takes the port from the environment variable HEALTHCHECK_PORT , and makes sure that port responds with status code 200, which means everything is OK.

docker/healthcheck/healthcheck.go

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	response, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s", os.Getenv("HEALTHCHECK_PORT")))
	if err != nil {
		os.Exit(1)
	}
	if response.StatusCode != 200 {
		os.Exit(1)
	}
	os.Exit(0)
}

NGINX

In the main NGINX configuration file nginx.conf , we'll remove the user declaration so we can run it as a non-root user, change worker_processes to 1, and point the access_log at stdout so requests show up in Docker logs. I haven't enabled gzip here, because my production web apps sit behind an NGINX reverse proxy with Brotli (a more efficient compression algorithm) enabled, and Brotli won't re-compress a gzipped file.

docker/nginx/nginx.conf

worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;

    sendfile on;
    keepalive_timeout 65;

    include /etc/nginx/conf.d/*.conf;
}

In the site configuration file default.conf , we’ll set the port to 8000 so we can run NGINX as a non-root user, tell NGINX how to serve the generated static files from Gatsby, and ensure that the correct Cache-Control headers are set up for each resource to maintain the speed that Gatsby offers out of the box.

docker/nginx/default.conf

server {
    listen 8000 default_server;
    charset utf-8;
    access_log /dev/stdout;

    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/index.html $uri/index.htm $uri/ =404;
    }

    location = /sw.js {
        add_header Cache-Control "public, max-age=0, must-revalidate";
    }

    location ^~ /page-data {
        add_header Cache-Control "public, max-age=0, must-revalidate";
    }

    location ^~ /static {
        add_header Cache-Control "public, max-age=31556926, immutable";
    }

    location ~* \.(js|css|woff|woff2|ttf|otf|eot|png|apng|bmp|gif|ico|cur|jpg|jpeg|jfif|swf|pjpeg|pjp|svg|tif|tiff|webp|wav|webm|ogg|mp3|mp4|wmv|mov|avi|flv|vtt)$ {
        add_header Cache-Control "public, max-age=31556926, immutable";
    }

    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Dockerfile

We’ll run two environments, on the development side we’ll be bind mounting the project directory and running as root, and on the production side we’ll build the Go healthcheck and Gatsby site then pass them over to a non-root NGINX container, so we’ll need a separate Dockerfile for each environment.

Development

Nothing too special here, except the CHOKIDAR_USEPOLLING environment variable, which fixes hot-reload by using polling instead of fsevents or inotify, since neither works in a Docker bind mount.

Dockerfile.dev

FROM node:11-alpine

RUN apk add --no-cache --virtual .gyp python make g++

WORKDIR /app

COPY package.json package-lock.json ./
RUN npm install
RUN npm install --global gatsby-cli

ENV NODE_ENV=development
ENV CHOKIDAR_USEPOLLING=1

EXPOSE 8000

CMD ["npm", "run", "develop"]

Production

Here we are using two build stages, one for the Go healthcheck (the extra flags keep the executable smaller) and one for the Gatsby site, then pulling in our NGINX configurations and setting the user and environment variables.

Dockerfile

FROM node:11-alpine AS builder_gatsby
RUN apk add --no-cache --virtual .gyp python make g++
WORKDIR /app
ENV NODE_ENV=production
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

FROM golang:alpine AS builder_go
WORKDIR $GOPATH
COPY ./docker/healthcheck ./
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /go/bin/healthcheck

FROM nginx:alpine
COPY --from=builder_gatsby --chown=nginx:nginx /app/public /usr/share/nginx/html
COPY --from=builder_go --chown=nginx:nginx /go/bin/healthcheck /usr/local/bin/healthcheck
COPY ./docker/nginx/nginx.conf /etc/nginx/nginx.conf
COPY ./docker/nginx/default.conf /etc/nginx/conf.d/default.conf
RUN chown -R nginx:nginx /var/cache/nginx
ENV NODE_ENV=production
ENV HEALTHCHECK_PORT=8000
USER nginx
EXPOSE 8000
HEALTHCHECK --start-period=15s --interval=1m --timeout=5s CMD /usr/local/bin/healthcheck

Docker Compose

Development

The trick here is to create volumes for node_modules , public , and .cache before bind mounting the project directory. This way the container keeps its own version of those directories, preventing any Windows/Linux conflicts.

docker-compose.dev.yml

version: "3.7"

services:
  dev:
    volumes:
      - type: volume
        source: node_modules
        target: /app/node_modules
      - type: volume
        source: public
        target: /app/public
      - type: volume
        source: cache
        target: /app/.cache
      - type: bind
        source: .
        target: /app
    networks:
      - net1
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "8000:8000"

volumes:
  node_modules:
  public:
  cache:

networks:
  net1:
    name: cloud-native-gatsby

Production

Nothing fancy here, all the work was done in the image created by the Dockerfile.

docker-compose.yml

version: "3.7"

services:
  prod:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8000:8000"

NPM Scripts

First, we need to add -H 0.0.0.0 to the develop script so we can access the site in the development environment.

"develop": "gatsby develop -H 0.0.0.0",

Now that the configurations are all set to go, let’s make them easy to use with some more NPM scripts.

Development

"start:dev": "docker-compose -f docker-compose.dev.yml up --remove-orphans -d",
"stop:dev": "docker-compose -f docker-compose.dev.yml stop",
"build:dev": "docker-compose -f docker-compose.dev.yml build",
"clean:dev": "docker-compose -f docker-compose.dev.yml down --rmi local --volumes --remove-orphans",

Production

"start:prod": "docker-compose -f docker-compose.yml up --build --force-recreate --remove-orphans -d",
"stop:prod": "docker-compose -f docker-compose.yml stop",
"build:prod": "docker-compose -f docker-compose.yml build",
"clean:prod": "docker-compose -f docker-compose.yml down --rmi local --volumes --remove-orphans",

Publishing

I keep a local production Docker host with a private registry to manage deployments, it uses ouroboros to automatically spin up new images after they land in the registry. Feel free to write your own publish script here!

"publish": "docker tag cloudnativedocker_prod your.docker.registry:5000/cloudnativedocker:latest && docker push your.docker.registry:5000/cloudnativedocker:latest && docker rmi your.docker.registry:5000/cloudnativedocker:latest",

Standardize Formatting

Before my first git commit , I try to remember to run prettier --write ./**/* , which formats all the code in the project that Prettier can, in accordance with your .prettierrc file. Doing this now ensures that future commits will be clean and easy to read because formatting changes aren't mixed into them.

Full-Stack NextJS

Now that we've got our feet wet, let's dive into a full-stack setup. In lieu of adding yet another to-do app to the internets, let's make a really basic issue tracker. I'll omit authentication here since this is already a pretty long article, but I do plan to cover authentication in a dedicated article in the future.

For your reference, here is a GitHub repository of the finished project:

Initialize

Create a new folder on your desktop (I'll go with cloud-native-next ) and open it up in VSCodium, then run git init and npm init -y in the root directory of the project to create a git repository and a package.json file.
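Collected as commands, the bootstrap steps above look like this (run from wherever you keep your projects; the folder name is just the one I chose):

```shell
# Create the project folder, then initialize Git and NPM inside it.
mkdir cloud-native-next
cd cloud-native-next
git init
npm init -y
```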

Create the folders db , nginx , and server in the root directory of the project, and then run npx create-next-app client to initialize a new NextJS app called client .

You should now have four folders: db , nginx , server , and client , as well as a package.json file in the root of the project.

I'll omit my Prettier setup here; see the Gatsby section above if you skipped it, since the setup is the same.

Server

First, we need a .gitignore file in the server directory, we should only need node_modules in this one.

We’ll also want a .dockerignore file here with node_modules in it.
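A minimal sketch of both files, assuming you're in the project root (the mkdir -p is only a guard in case you're following along out of order):

```shell
# Both ignore files only need node_modules for now; tee writes
# the same line to each of them in one step.
mkdir -p server
printf 'node_modules\n' | tee server/.gitignore server/.dockerignore
```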

Let’s get our GraphQL API up and running, we’ll start by installing the necessary packages.

Initialize

cd server
npm init -y
npm install --save express graphql apollo-server apollo-server-express mongoose
npm install --save-dev nodemon morgan @babel/core @babel/node @babel/preset-env @babel/register

We need a basic Babel config, create the file .babelrc in the server directory with the following contents:

{
  "presets": [
    [
      "@babel/preset-env",
      {
        "targets": {
          "node": "current"
        }
      }
    ]
  ]
}

Models

We’ll create the folder server/src/models/ , and a new file issue.js in it. To make it interesting let’s have an activity feed for each issue. So the issue schema will need a title and a description, we’ll use a virtual field to pull in feed entries (from another model we’ll create next), sort them by newest to oldest, and make sure they get deleted when this issue is deleted. We’ll also create an “Issue created” feed entry automatically.

You should make a point to skim the Mongoose documentation; it's a really powerful tool with a ton of useful features!

server/src/models/issue.js

import mongoose from 'mongoose'

const issueSchema = new mongoose.Schema({
  title: {
    type: String,
    required: [true, 'Issue must have a title!'],
  },
  description: {
    type: String,
  },
})

issueSchema.virtual('feed', {
  ref: 'Feed',
  localField: '_id',
  foreignField: 'issue',
  options: {
    sort: {
      timestamp: -1,
    },
  },
})

issueSchema.pre('save', function (next) {
  if (this.isNew) {
    // Pass next as the callback (not next()) so save waits for the
    // feed entry to be created.
    this.model('Feed').create(
      {
        issue: this.id,
        message: 'Issue created.',
      },
      next,
    )
  } else {
    next()
  }
})

issueSchema.pre('remove', function (next) {
  this.model('Feed').deleteMany({ issue: this.id }, next)
})

const Issue = mongoose.model('Issue', issueSchema)

export default Issue

Now for the Feed model, we’ll keep it simple with a reference to the issue it’s related to, a timestamp, and a message.

server/src/models/feed.js

import mongoose from 'mongoose'

const feedSchema = new mongoose.Schema({
  issue: {
    type: mongoose.Schema.Types.ObjectId,
    ref: 'Issue',
    required: [true, 'Feed must have an associated issue!'],
  },
  message: {
    type: String,
    required: [true, 'Feed must have a message!'],
  },
  timestamp: {
    type: Date,
  },
})

feedSchema.pre('save', function (next) {
  if (this.isNew) {
    this.timestamp = new Date(Date.now())
  }
  next()
})

const Feed = mongoose.model('Feed', feedSchema)

export default Feed

Now we need to set up the database configuration, we’ll also export our models from this file for ease of use later. The extra options in the Mongoose connection just disable some deprecated features and silence the warnings in the console.

server/src/models/index.js

import mongoose from 'mongoose'
import Issue from './issue'
import Feed from './feed'

const connectDb = () => {
  return mongoose.connect(process.env.DATABASE_URL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    // false silences the findAndModify deprecation warning
    useFindAndModify: false,
    useCreateIndex: true,
  })
}

const models = { Issue, Feed }

export { connectDb }

export default models

Schema

Next, we'll need a schema folder for our GraphQL schemas. We're using extend type for Query and Mutation because we'll tie all of them together and export them from an index file as with the models, and each type can only be declared once. We'll also add a Healthcheck schema for Docker.

server/src/schema/issue.js

import { gql } from 'apollo-server-express'

export default gql`
  extend type Query {
    issues: [Issue!]!
    issue(id: ID!): Issue!
  }

  extend type Mutation {
    createIssue(title: String!, description: String): Issue!
    removeIssue(id: ID!): ID!
  }

  type Issue {
    id: ID!
    title: String!
    description: String
    feed: [Feed!]
  }
`

I’ll skip adding a Date scalar to save time, it’s fairly easy to do, but this is already a long article!

server/src/schema/feed.js

import { gql } from 'apollo-server-express'

export default gql`
  extend type Mutation {
    createFeed(issue: ID!, message: String!): Feed!
  }

  type Feed {
    id: ID!
    message: String!
    # This should really use a Date scalar!
    timestamp: String!
  }
`

server/src/schema/healthcheck.js

import { gql } from 'apollo-server-express'

export default gql`
  extend type Query {
    healthcheck: Boolean!
  }
`

server/src/schema/index.js

import { gql } from 'apollo-server-express'
import healthcheckSchema from './healthcheck'
import issueSchema from './issue'
import feedSchema from './feed'

const baseSchema = gql`
  type Query {
    _: Boolean
  }

  type Mutation {
    _: Boolean
  }
`

export default [
  baseSchema,
  healthcheckSchema,
  issueSchema,
  feedSchema,
]

Resolvers

Keeping with our pattern, we’ll now make a server/src/resolvers directory to work out of for the GraphQL resolvers. We’ll deliver the models object via context to the resolvers in the apollo-express configuration later, so, for now, we’ll just assume they’re there.

server/src/resolvers/healthcheck.js

export default {
  Query: {
    healthcheck: (parent, args, context) => {
      return true
    },
  },
}

The parent, args, context signature isn't needed here; I included it to illustrate what properties are made available to the resolvers.

For the Issue queries we have issues , which just returns a list of issues, but the issue query should also return that issue's feed, so we need to populate it with Mongoose.

server/src/resolvers/issue.js

import { UserInputError } from 'apollo-server'

export default {
  Query: {
    issues: async (parent, args, { models }) => {
      return await models.Issue.find({})
    },
    issue: async (parent, { id }, { models }) => {
      const issue = await models.Issue.findById(id).populate({
        path: 'feed',
      })
      if (!issue) throw new UserInputError('Could not find issue!')
      return issue
    },
  },
  Mutation: {
    createIssue: async (parent, { title, description }, { models }) => {
      const issue = await models.Issue.create({
        title,
        description,
      })
      return issue
    },
    removeIssue: async (parent, { id }, { models }) => {
      const issue = await models.Issue.findById(id)
      if (!issue) throw new UserInputError('Could not find issue!')
      try {
        await issue.remove()
      } catch {
        return null
      }
      return issue.id
    },
  },
}

server/src/resolvers/feed.js

import { UserInputError } from 'apollo-server'

export default {
  Mutation: {
    createFeed: async (parent, { issue, message }, { models }) => {
      // findById returns null for a missing ID (find would return an
      // empty, truthy array), so the existence check works.
      const issueObj = await models.Issue.findById(issue).select('_id')
      if (!issueObj) throw new UserInputError('Invalid issue ID')
      const feed = await models.Feed.create({
        issue,
        message,
      })
      return feed
    },
  },
}

server/src/resolvers/index.js

import healthcheckResolvers from './healthcheck'
import issueResolvers from './issue'
import feedResolvers from './feed'

export default [healthcheckResolvers, issueResolvers, feedResolvers]

Express

Now we just tie all that together, connect to the database, and start our Apollo GraphQL server.

server/src/index.js

import { ApolloServer } from 'apollo-server-express'
import express from 'express'
import http from 'http'
import morgan from 'morgan'
import models, { connectDb } from './models'
import resolvers from './resolvers'
import schema from './schema'

const app = express()

app.use(morgan('dev'))

const server = new ApolloServer({
  introspection: true,
  typeDefs: schema,
  resolvers,
  context: { models },
})

server.applyMiddleware({ app, path: '/graphql' })

const httpServer = http.createServer(app)
server.installSubscriptionHandlers(httpServer)

const port = process.env.PORT || 3000

connectDb().then(async () => {
  httpServer.listen({ port }, () => {
    console.log(`Apollo Server on http://localhost:${port}/graphql`)
  })
})

Healthcheck

I'll use NodeJS for this healthcheck since it's already installed on the container; no need to add any more complexity. We'll just hit our GraphQL server with the healthcheck query and make sure it returns true . While it takes some extra effort, I try to avoid using installed packages for healthchecks when possible; it keeps things cleaner and more portable. Since this healthcheck relies only on NodeJS's built-in libraries, I can copy it to any project without worrying about dependencies.

server/healthcheck.js

const http = require('http')

const port = process.env.PORT || 3000
const query = '{ "query": "{ healthcheck }" }'

const options = {
  hostname: 'localhost',
  port,
  path: '/graphql',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
}

const req = http.request(options, (res) => {
  res.setEncoding('utf8')
  res.on('data', (chunk) => {
    const { data } = JSON.parse(chunk)
    if (data.healthcheck === true) {
      process.exit(0)
    } else {
      process.exit(1)
    }
  })
})

req.on('error', () => {
  process.exit(1)
})

req.write(query)
req.end()

Dockerfile

I'm correcting the time zones for all containers in this project, just because it's a common issue I tend to overlook; make sure you adjust the zone as appropriate for your location.

Development

server/Dockerfile.dev

FROM node:10-alpine

RUN apk --no-cache add --virtual native-deps \
    g++ gcc libgcc libstdc++ linux-headers autoconf automake make nasm python git tzdata && \
    npm install --quiet node-gyp -g

RUN ln -s /usr/share/zoneinfo/America/Phoenix /etc/localtime
RUN echo "America/Phoenix" > /etc/timezone

RUN mkdir -p /home/apollo/app
WORKDIR /home/apollo/app

COPY ./package*.json ./
RUN npm install

ENV NODE_ENV=development

EXPOSE 3000

CMD ["npm", "run", "docker-dev"]

HEALTHCHECK --start-period=15s --interval=1m --timeout=5s CMD ["node", "healthcheck"]

Production

server/Dockerfile

FROM node:10-alpine as builder
RUN mkdir /app
WORKDIR /app
COPY ./package*.json ./
RUN npm install

FROM node:10-alpine
RUN apk --no-cache add --virtual native-deps \
    g++ gcc libgcc libstdc++ linux-headers autoconf automake make nasm python git tzdata && \
    npm install --quiet node-gyp -g
RUN ln -s /usr/share/zoneinfo/America/Phoenix /etc/localtime
RUN echo "America/Phoenix" > /etc/timezone
RUN addgroup -g 101 -S apollo
RUN adduser -D --home /home/apollo -u 101 -S apollo -G apollo
RUN mkdir /home/apollo/app
WORKDIR /home/apollo/app
COPY --from=builder /app .
ENV NODE_ENV=production
COPY . .
RUN mkdir /home/apollo/app/node_modules/.cache
RUN chown -R apollo:apollo /home/apollo/app/node_modules/.cache
USER apollo
EXPOSE 3000
CMD ["npm", "start"]
HEALTHCHECK --start-period=15s --interval=1m --timeout=5s CMD ["node", "healthcheck"]

NPM Scripts

Add some NPM scripts in the server/package.json file. The fuser -k... bit is a hack to get nodemon 's live reload to work correctly in Docker for Windows; without it, nodemon will fail, saying the port is in use.

Development

"docker-dev": "nodemon --watch ./src -e js -L --delay 80ms --exec 'fuser -k 56745/tcp; babel-node --inspect=0.0.0.0:56745 src/index.js'"

Production

"start": "nodemon --exec babel-node src/index.js",

Database

Not much exciting here, just setting the timezone and using MongoDB’s own “ping” command for a healthcheck.

Dockerfile

db/Dockerfile

FROM mongo:4.2

RUN rm /etc/localtime
RUN ln -s /usr/share/zoneinfo/America/Phoenix /etc/localtime
RUN echo "America/Phoenix" > /etc/timezone
RUN dpkg-reconfigure --frontend noninteractive tzdata

# Shell form is required here: the exec-form array would pass "|" as a
# literal argument to echo instead of piping into the mongo shell.
HEALTHCHECK --start-period=15s --interval=1m --timeout=5s CMD echo 'db.runCommand("ping").ok' | mongo localhost:27017 --quiet

NGINX

We’ll use NGINX to reverse proxy requests for any URI starting with /graphql to the back-end, and any other requests to the front-end.

Healthcheck

I believe Go is the lightest way to implement a healthcheck for this container, so we’ll just use the same one from the Gatsby example.

nginx/healthcheck.go

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	response, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s", os.Getenv("HEALTHCHECK_PORT")))
	if err != nil {
		os.Exit(1)
	}
	if response.StatusCode != 200 {
		os.Exit(1)
	}
	os.Exit(0)
}

NGINX Server Configuration

Again we’re removing the user configuration from NGINX so we can run it as a non-root user.

nginx/nginx.conf

worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /tmp/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /dev/stdout main;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 65;
    gzip on;

    include /etc/nginx/conf.d/*.conf;
}

NGINX Site Configuration

These are the reverse proxy configurations. Note that both support WebSocket connections in case we want to implement GraphQL subscriptions; hot-reloading in development mode also depends on WebSockets.

nginx/default.conf

upstream client {
    server client:3000;
}

upstream server {
    server server:3000;
}

server {
    listen 3000 default_server;
    charset utf-8;
    access_log /dev/stdout;

    location / {
        proxy_read_timeout 36000s;
        proxy_http_version 1.1;
        proxy_buffering off;
        client_max_body_size 0;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_hide_header X-Powered-By;
        proxy_pass http://client;
    }

    location ^~ /graphql {
        proxy_read_timeout 36000s;
        proxy_http_version 1.1;
        proxy_buffering off;
        client_max_body_size 0;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_hide_header X-Powered-By;
        proxy_pass http://server;
    }

    error_page 404 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

Dockerfile

Hopefully this is starting to look familiar by now!

nginx/Dockerfile

FROM golang:alpine AS builder_go
WORKDIR $GOPATH
COPY ./healthcheck.go .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /go/bin/healthcheck

FROM nginx:alpine
COPY --from=builder_go --chown=nginx:nginx /go/bin/healthcheck /usr/local/bin/healthcheck
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./default.conf /etc/nginx/conf.d/default.conf
RUN chown -R nginx:nginx /var/cache/nginx
RUN apk --no-cache add --virtual native-deps tzdata
RUN ln -s /usr/share/zoneinfo/America/Phoenix /etc/localtime
RUN echo "America/Phoenix" > /etc/timezone
ENV HEALTHCHECK_PORT=3000
USER nginx
EXPOSE 3000
HEALTHCHECK --start-period=15s --interval=1m --timeout=5s CMD ["/usr/local/bin/healthcheck"]

Client

We'll just get the front-end Dockerized for now; once it's up and running we can build our issue tracker app's front-end.

Healthcheck

client/healthcheck.js

const http = require('http')

http
  .get('http://localhost:3000', (res) => {
    if (res.statusCode !== 200) {
      process.exit(1)
    } else {
      process.exit(0)
    }
  })
  .on('error', () => {
    // A refused connection should fail the healthcheck cleanly
    // instead of crashing with an uncaught exception.
    process.exit(1)
  })

Dockerfile

Don’t forget about the .dockerignore file! We need node_modules and .next in this one.
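Assuming you're in the project root, the client ignore file could be created like so (the mkdir -p is only a guard in case the client folder doesn't exist yet):

```shell
# The client container manages its own node_modules and .next build
# cache, so the image build should skip both.
mkdir -p client
printf 'node_modules\n.next\n' > client/.dockerignore
```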

Development

client/Dockerfile.dev

FROM node:10-alpine

RUN apk --no-cache add --virtual native-deps tzdata
RUN ln -s /usr/share/zoneinfo/America/Phoenix /etc/localtime
RUN echo "America/Phoenix" > /etc/timezone

RUN mkdir -p /home/nextjs/app
WORKDIR /home/nextjs/app

COPY ./package*.json ./
RUN npm install

ENV NODE_ENV=development

EXPOSE 3000

CMD ["npm", "run", "dev"]

HEALTHCHECK --start-period=15s --interval=1m --timeout=5s CMD ["node", "healthcheck"]

Production

client/Dockerfile

FROM node:10-alpine
RUN apk --no-cache add --virtual native-deps tzdata
RUN ln -s /usr/share/zoneinfo/America/Phoenix /etc/localtime
RUN echo "America/Phoenix" > /etc/timezone
RUN addgroup -g 101 -S nextjs
RUN adduser -D --home /home/nextjs -u 101 -S nextjs -G nextjs
RUN mkdir /home/nextjs/app
WORKDIR /home/nextjs/app
COPY . .
RUN npm install
ENV NODE_ENV=production
RUN npm run build
RUN chown -R nextjs:nextjs /home/nextjs/app/.next
USER nextjs
EXPOSE 3000
CMD ["npm", "start"]
HEALTHCHECK --start-period=15s --interval=1m --timeout=5s CMD ["node", "healthcheck"]

NextJS Configuration

Here is the fix for hot-reloading, which is applied only if NODE_ENV is not set to production via a ternary operator.

client/next.config.js

const watchOptions =
  process.env.NODE_ENV === 'production'
    ? {}
    : { poll: 800, aggregateTimeout: 300 }

module.exports = {
  webpackDevMiddleware: (config) => {
    config.watchOptions = watchOptions
    return config
  },
}

Docker Compose

Development

docker-compose.dev.yml

version: '3.7'

services:
  db_dev:
    volumes:
      - type: volume
        source: db-data_dev
        target: /data/db
      - type: volume
        source: db-config_dev
        target: /data/configdb
    networks:
      net1:
        aliases:
          - db
    ports:
      - '2999:27017'
    build:
      context: ./db
      dockerfile: Dockerfile
  server_dev:
    depends_on:
      - db_dev
    volumes:
      - type: volume
        source: server-node_modules_dev
        target: /home/apollo/app/node_modules
      - type: bind
        source: ./server
        target: /home/apollo/app
    environment:
      - DATABASE_URL=mongodb://db:27017/cloud-native-next
      - PORT=3000
    build:
      context: ./server
      dockerfile: Dockerfile.dev
    networks:
      net1:
        aliases:
          - server
  client_dev:
    depends_on:
      - server_dev
    volumes:
      - type: volume
        source: client-node_modules_dev
        target: /home/nextjs/app/node_modules
      - type: volume
        source: client-cache_dev
        target: /home/nextjs/app/.next
      - type: bind
        source: ./client
        target: /home/nextjs/app
    build:
      context: ./client
      dockerfile: Dockerfile.dev
    networks:
      net1:
        aliases:
          - client
  nginx_dev:
    depends_on:
      - server_dev
      - client_dev
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    networks:
      net1:
        aliases:
          - nginx

networks:
  net1:
    name: cloud-native-next

volumes:
  db-data_dev:
  db-config_dev:
  server-node_modules_dev:
  client-node_modules_dev:
  client-cache_dev:

Production

docker-compose.yml

version: '3.7'

services:
  db_prod:
    volumes:
      - type: volume
        source: db-data_prod
        target: /data/db
      - type: volume
        source: db-config_prod
        target: /data/configdb
    networks:
      net1:
        aliases:
          - db
    build:
      context: ./db
      dockerfile: Dockerfile
  server_prod:
    depends_on:
      - db_prod
    build:
      context: ./server
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=mongodb://db:27017/cloud-native-next
      - PORT=3000
    networks:
      net1:
        aliases:
          - server
  client_prod:
    depends_on:
      - server_prod
    build:
      context: ./client
      dockerfile: Dockerfile
    networks:
      net1:
        aliases:
          - client
  nginx_prod:
    depends_on:
      - server_prod
      - client_prod
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    networks:
      net1:
        aliases:
          - nginx

networks:
  net1:
    name: cloud-native-next

volumes:
  db-data_prod:
  db-config_prod:

Main NPM Scripts

These Docker scripts go in the package.json file at the root of the project.

Development

"start:dev" : "docker-compose -f docker-compose.dev.yml up --remove-orphans -d" , "stop:dev" : "docker-compose -f docker-compose.dev.yml stop" , "clean:dev" : "docker-compose -f docker-compose.dev.yml down --rmi local --volumes --remove-orphans" , "build:dev" : "docker-compose -f docker-compose.dev.yml build" ,

Production

"start:prod" : "docker-compose -f docker-compose.yml up --force-recreate --remove-orphans -d" , "stop:prod" : "docker-compose -f docker-compose.yml stop" , "clean:prod" : "docker-compose -f docker-compose.yml down --rmi local --volumes --remove-orphans" , "build:prod" : "docker-compose -f docker-compose.yml build" ,

I also like to add individual production build scripts for each container, so you don’t have to rebuild the whole project if, for example, you only changed something on the front-end.

"build:prod:db" : "docker-compose -f docker-compose.yml build db_prod" , "build:prod:server" : "docker-compose -f docker-compose.yml build server_prod" , "build:prod:client" : "docker-compose -f docker-compose.yml build client_prod" , "build:prod:nginx" : "docker-compose -f docker-compose.yml build nginx_prod" ,

Bring up environments

Okay, now we run npm run start:dev to spin up the containers. It’s helpful to open a couple of terminals to watch the logs for the client and server; you can do that with docker logs cloud-native-next_server_dev_1 -f and docker logs cloud-native-next_client_dev_1 -f .

You should now have access to GraphQL Explorer on the back-end at http://localhost:3000/graphql, and the NextJS front-end at http://localhost:3000/.
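To verify the back-end is healthy, you can paste a quick query into GraphQL Explorer. Given the issue-tracker schema we query later in this article, something like this should return the issue list (empty at first):

```graphql
query {
  issues {
    id
    title
    description
  }
}
```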

Front-End Application

Now we can start creating. I’ll be using Semantic-UI to quickly build a user interface that looks nice and clean; they have some great documentation you may reference if you wish.

Install Dependencies

We need to install some dependencies on the client side, which we can do from the running dev container. Use docker exec -it cloud-native-next_client_dev_1 /bin/sh to open up a terminal session within it.

npm install --save \
  @apollo/react-hooks \
  @apollo/react-ssr \
  apollo-cache-inmemory \
  apollo-client \
  apollo-link-http \
  apollo-utilities \
  graphql \
  graphql-tag \
  https-proxy-agent \
  isomorphic-unfetch \
  semantic-ui-react

When it’s finished, you can get out of the container session using exit .

Library

To keep things organized, let’s put together a library first in a new directory client/lib/ .

Apollo Client

This file gets a little hairy; there are a lot of moving parts to get Apollo working with both server-side rendering (SSR) and client-side rendering (CSR). Don’t worry if you don’t understand it completely. It’s mostly copied from the Zeit with-apollo example on GitHub, and I had to study it quite a bit to understand what it’s doing. The only change I made is the uri of HttpLink, so it handles both the SSR and client-side paths.

client/lib/with-apollo.js

import React from 'react'
import Head from 'next/head'
import { ApolloProvider } from '@apollo/react-hooks'
import { ApolloClient } from 'apollo-client'
import { InMemoryCache } from 'apollo-cache-inmemory'
import { HttpLink } from 'apollo-link-http'
import fetch from 'isomorphic-unfetch'

let globalApolloClient = null

export function withApollo(PageComponent, { ssr = true } = {}) {
  const WithApollo = ({ apolloClient, apolloState, ...pageProps }) => {
    const client = apolloClient || initApolloClient(apolloState)
    return (
      <ApolloProvider client={client}>
        <PageComponent {...pageProps} />
      </ApolloProvider>
    )
  }

  if (process.env.NODE_ENV !== 'production') {
    const displayName =
      PageComponent.displayName || PageComponent.name || 'Component'
    if (displayName === 'App') {
      console.warn('This withApollo HOC only works with PageComponents.')
    }
    WithApollo.displayName = `withApollo(${displayName})`
  }

  if (ssr || PageComponent.getInitialProps) {
    WithApollo.getInitialProps = async (ctx) => {
      const { AppTree } = ctx
      const apolloClient = (ctx.apolloClient = initApolloClient())
      let pageProps = {}
      if (PageComponent.getInitialProps) {
        pageProps = await PageComponent.getInitialProps(ctx)
      }
      if (typeof window === 'undefined') {
        if (ctx.res && ctx.res.finished) {
          return pageProps
        }
        if (ssr) {
          try {
            const { getDataFromTree } = await import('@apollo/react-ssr')
            await getDataFromTree(
              <AppTree
                pageProps={{
                  ...pageProps,
                  apolloClient,
                }}
              />,
            )
          } catch (error) {
            console.error('Error while running `getDataFromTree`', error)
          }
          Head.rewind()
        }
      }
      const apolloState = apolloClient.cache.extract()
      return {
        ...pageProps,
        apolloState,
      }
    }
  }

  return WithApollo
}

function initApolloClient(initialState) {
  if (typeof window === 'undefined') {
    return createApolloClient(initialState)
  }
  if (!globalApolloClient) {
    globalApolloClient = createApolloClient(initialState)
  }
  return globalApolloClient
}

function createApolloClient(initialState = {}) {
  return new ApolloClient({
    ssrMode: typeof window === 'undefined',
    link: new HttpLink({
      uri:
        typeof window === 'undefined'
          ? 'http://server:3000/graphql'
          : '/graphql',
      credentials: 'same-origin',
      fetch,
    }),
    cache: new InMemoryCache().restore(initialState),
  })
}

GraphQL

client/lib/queries.js

import gql from 'graphql-tag'

export const ISSUE_LIST = gql`
  query {
    issues {
      id
      title
      description
    }
  }
`

export const ISSUE_DETAIL = gql`
  query($id: ID!) {
    issue(id: $id) {
      id
      title
      description
      feed {
        id
        message
        timestamp
      }
    }
  }
`

client/lib/mutations.js

import gql from 'graphql-tag'

export const ADD_ISSUE = gql`
  mutation($title: String!, $description: String) {
    createIssue(title: $title, description: $description) {
      id
      title
      description
    }
  }
`

export const REMOVE_ISSUE = gql`
  mutation($id: ID!) {
    removeIssue(id: $id)
  }
`

export const ADD_FEED = gql`
  mutation($issue: ID!, $message: String!) {
    createFeed(issue: $issue, message: $message) {
      id
      message
      timestamp
    }
  }
`

Utilities

I could use MomentJS ( npm install moment ) for any date manipulation I need to perform, but since I only need one format, I’ll save some bundle size by writing my own function. The getIdAttrib function attempts to find and return the value of the data-id attribute of any HTML element passed into it; it’s a convenient way to pass data around in React, as you’ll see later on.

client/lib/util.js

export const formatDate = ({ dateString }) => {
  const stringPad = (value) => {
    const string = '0' + value
    return string.slice(-2)
  }
  const date = new Date(parseInt(dateString))
  const y = date.getFullYear()
  const m = stringPad(date.getMonth() + 1)
  const d = stringPad(date.getDate())
  const h = date.getHours()
  const M = stringPad(date.getMinutes())
  const s = stringPad(date.getSeconds())
  // h % 12 || 12 maps 0 to 12 and 13 to 1, so noon and midnight format correctly
  return `${y}/${m}/${d} ${h % 12 || 12}:${M}:${s} ${h >= 12 ? 'PM' : 'AM'}`
}

export const getIdAttrib = (element) => {
  const result = element.dataset.id || null
  if (!result) {
    console.log('Element has no "data-id" attribute!')
  }
  return result
}
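To see the helpers in action, here is a quick sanity check you can run with Node; the functions are restated in slightly simplified form (the data-id warning is omitted) so the snippet is self-contained:

```javascript
// Simplified restatement of the client/lib/util.js helpers so this
// snippet runs standalone in Node.
const formatDate = ({ dateString }) => {
  const stringPad = (value) => ('0' + value).slice(-2)
  const date = new Date(parseInt(dateString))
  const y = date.getFullYear()
  const m = stringPad(date.getMonth() + 1)
  const d = stringPad(date.getDate())
  const h = date.getHours()
  const M = stringPad(date.getMinutes())
  const s = stringPad(date.getSeconds())
  return `${y}/${m}/${d} ${h % 12 || 12}:${M}:${s} ${h >= 12 ? 'PM' : 'AM'}`
}

const getIdAttrib = (element) => element.dataset.id || null

// GraphQL timestamps arrive as epoch-millisecond strings:
const ms = new Date(2020, 0, 15, 13, 5, 9).getTime()
console.log(formatDate({ dateString: String(ms) })) // 2020/01/15 1:05:09 PM

// getIdAttrib reads anything with a dataset, e.g. a stand-in object:
console.log(getIdAttrib({ dataset: { id: 'abc123' } })) // abc123
```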

Index

Keeping with our pattern, we’ll make an index file for our lib directory to export everything.

client/lib/index.js

import * as queries from './queries'
import * as mutations from './mutations'
import * as util from './util'

export { queries, mutations, util }
export { withApollo } from './with-apollo'

User Interface

We’ll need a form to add issues. The update function in the useMutation hook updates the Apollo cache (and, as a result, the elements on the page) with the newly submitted data. It receives the current cache and the mutation response as arguments; we destructure createIssue from the response, read the ISSUE_LIST query from the cache, then write it back with the new issue appended.

client/components/add-issue.js

import React, { useState } from 'react'
import { Container, Button, Input, Message } from 'semantic-ui-react'
import { useMutation } from '@apollo/react-hooks'
import { queries, mutations } from '../lib'

const AddIssue = () => {
  const [titleField, setTitleField] = useState('')
  const [descriptionField, setDescriptionField] = useState('')
  const [addIssue, { error, loading }] = useMutation(mutations.ADD_ISSUE, {
    variables: {
      title: titleField,
      description: descriptionField,
    },
    update: (cache, { data: { createIssue } }) => {
      const data = cache.readQuery({
        query: queries.ISSUE_LIST,
      })
      cache.writeQuery({
        query: queries.ISSUE_LIST,
        data: {
          ...data,
          issues: [...data.issues, createIssue],
        },
      })
    },
  })
  const updateTitle = (event, { value }) => setTitleField(value)
  const updateDescription = (event, { value }) => setDescriptionField(value)
  const handleSubmit = (event) => {
    event.preventDefault()
    addIssue().catch((error) => {
      console.log(`Failed to add issue: ${error.message || error}`)
    })
    setTitleField('')
    setDescriptionField('')
  }
  return (
    <Container textAlign='center'>
      <form onSubmit={handleSubmit}>
        <Input
          style={{ margin: '0.5rem' }}
          type='text'
          label='Title'
          value={titleField}
          onChange={updateTitle}
        />
        <Input
          style={{ margin: '0.5rem' }}
          type='text'
          label='Description'
          value={descriptionField}
          onChange={updateDescription}
        />
        <Button
          type='submit'
          disabled={loading || !titleField.length}
          loading={loading}
          style={{ margin: '0.5rem' }}
          primary
        >
          Add Issue
        </Button>
      </form>
      {!loading && !!error && (
        <Message error>
          {`Failed to add issue: ${error.message || error}`}
        </Message>
      )}
    </Container>
  )
}

export default AddIssue

We’ll use the Confirm component from Semantic-UI for removing issues; it’s essentially a dialog box that darkens everything behind it. We’ll trigger it whenever the removeIssueId state is non-null. After confirmation, we also want to hide the issue detail component if it’s displaying the issue that was just removed; that component is triggered by issueDetailId being non-null, as you may have guessed.

client/components/remove-issue.js

import React from 'react'
import { Confirm } from 'semantic-ui-react'
import { useMutation } from '@apollo/react-hooks'
import { queries, mutations } from '../lib'

const RemoveIssue = ({
  removeIssueId,
  setRemoveIssueId,
  issueDetailId,
  setIssueDetailId,
}) => {
  const [removeIssue] = useMutation(mutations.REMOVE_ISSUE, {
    variables: {
      id: removeIssueId,
    },
    update: (cache, { data: { removeIssue } }) => {
      const data = cache.readQuery({
        query: queries.ISSUE_LIST,
      })
      cache.writeQuery({
        query: queries.ISSUE_LIST,
        data: {
          ...data,
          issues: data.issues.filter(({ id }) => {
            return id !== removeIssue
          }),
        },
      })
    },
  })
  const handleCancel = () => setRemoveIssueId(null)
  const handleConfirm = () => {
    if (removeIssueId === issueDetailId) setIssueDetailId(null)
    removeIssue().catch((error) => {
      console.log(`Failed to remove issue: ${error.message || error}`)
    })
    setRemoveIssueId(null)
  }
  return (
    <Confirm
      open={!!removeIssueId}
      onCancel={handleCancel}
      onConfirm={handleConfirm}
    />
  )
}

export default RemoveIssue

We need to display the list of issues. You’ll see the getIdAttrib function in use here: note the data-id={id} props on the icons that bring up the issue detail view and the remove issue dialog box. In the handleShowDetail and handleRemoveIssue functions, we destructure target from the event object passed by each element.

client/components/issue-list.js

import React, { useState } from 'react'
import {
  Container,
  Table,
  Message,
  Header,
  Icon,
  Popup,
  Segment,
} from 'semantic-ui-react'
import { useQuery } from '@apollo/react-hooks'
import RemoveIssue from './remove-issue'
import { queries, util } from '../lib'
import AddIssue from './add-issue'

const IssueList = ({ issueDetailId, setIssueDetailId }) => {
  const [removeIssueId, setRemoveIssueId] = useState()
  const { data } = useQuery(queries.ISSUE_LIST)
  const { issues } = data || []
  const handleShowDetail = ({ target }) =>
    setIssueDetailId(util.getIdAttrib(target))
  const handleRemoveIssue = ({ target }) =>
    setRemoveIssueId(util.getIdAttrib(target))
  return (
    <Container>
      <AddIssue />
      <Header as='h2' attached='top'>
        Issues
      </Header>
      {!!issues && !!issues.length && (
        <Table celled padded attached='bottom'>
          <Table.Header>
            <Table.Row>
              <Table.HeaderCell>Title</Table.HeaderCell>
              <Table.HeaderCell>Description</Table.HeaderCell>
            </Table.Row>
          </Table.Header>
          <Table.Body>
            {issues.map(({ id, title, description }) => (
              <Table.Row key={id}>
                <Table.Cell>
                  <Popup
                    content='View Issue Feed'
                    position='top center'
                    trigger={
                      <Icon
                        onClick={handleShowDetail}
                        data-id={id}
                        link
                        name='sticky note'
                      />
                    }
                  />
                  <Popup
                    content='Remove Issue'
                    position='top center'
                    trigger={
                      <Icon
                        onClick={handleRemoveIssue}
                        data-id={id}
                        link
                        name='trash alternate'
                      />
                    }
                  />
                  {title}
                </Table.Cell>
                <Table.Cell>{description}</Table.Cell>
              </Table.Row>
            ))}
          </Table.Body>
        </Table>
      )}
      {(!issues || !issues.length) && (
        <Segment attached='bottom' textAlign='center'>
          <Message color='blue' compact>
            Nice! No Issues!
          </Message>
        </Segment>
      )}
      <RemoveIssue
        {...{
          removeIssueId,
          setRemoveIssueId,
          issueDetailId,
          setIssueDetailId,
        }}
      />
    </Container>
  )
}

export default IssueList

Here we’ll show the feed for a selected issue.

client/components/issue-detail.js

import React from 'react'
import {
  Container,
  Header,
  Loader,
  Message,
  Icon,
  Popup,
  Feed,
  Segment,
} from 'semantic-ui-react'
import { useQuery } from '@apollo/react-hooks'
import AddFeed from './add-feed'
import { queries, util } from '../lib'

const IssueDetail = ({ issueDetailId, setIssueDetailId }) => {
  const { loading, error, data } = useQuery(queries.ISSUE_DETAIL, {
    variables: {
      id: issueDetailId,
    },
  })
  if (!!loading) return <Loader active />
  if (!!error)
    return (
      <Container style={{ marginTop: '1rem' }}>
        <Message error>
          {`Failed to get client detail: ${error.message || error}`}
        </Message>
      </Container>
    )
  if (!data.issue || !data.issue.id)
    return (
      <Container style={{ marginTop: '1rem' }}>
        <Message error>
          {`Failed to get client detail: Query returned empty data set.`}
        </Message>
      </Container>
    )
  const { issue } = data
  const handleClose = () => setIssueDetailId(null)
  return (
    <Container style={{ marginTop: '1rem' }}>
      <Header as='h2' attached='top'>
        Issue: {issue.title}
      </Header>
      <Segment attached='bottom'>
        <div style={{ float: 'right' }}>
          <Popup
            content='Hide Issue Feed'
            position='top center'
            trigger={<Icon onClick={handleClose} link name='x' size='large' />}
          />
        </div>
        <AddFeed {...{ id: data.issue.id }} />
        <Feed>
          {issue.feed.map(({ id, message, timestamp }) => {
            const date = util.formatDate({ dateString: timestamp })
            return (
              <Feed.Event
                key={id}
                size='large'
                icon='thumbtack'
                date={date}
                content={message}
              />
            )
          })}
        </Feed>
      </Segment>
    </Container>
  )
}

export default IssueDetail

Another quick form to add feed entries.

client/components/add-feed.js

import React, { useState } from 'react'
import { Container, Button, Input, Message } from 'semantic-ui-react'
import { useMutation } from '@apollo/react-hooks'
import { queries, mutations } from '../lib'

const AddFeed = ({ id }) => {
  const [messageField, setMessageField] = useState('')
  const [addFeed, { error, loading }] = useMutation(mutations.ADD_FEED, {
    variables: {
      issue: id,
      message: messageField,
    },
    update: (cache, { data: { createFeed } }) => {
      const data = cache.readQuery({
        query: queries.ISSUE_DETAIL,
        variables: { id },
      })
      cache.writeQuery({
        query: queries.ISSUE_DETAIL,
        data: {
          ...data,
          issue: {
            ...data.issue,
            feed: [createFeed, ...data.issue.feed],
          },
        },
      })
    },
  })
  const updateMessage = (event, { value }) => setMessageField(value)
  const handleSubmit = (event) => {
    event.preventDefault()
    addFeed().catch((error) => {
      console.log(`Failed to add feed: ${error.message || error}`)
    })
    setMessageField('')
  }
  return (
    <Container style={{ marginBottom: '1rem' }}>
      <form onSubmit={handleSubmit}>
        <Input
          style={{ margin: '0.5rem' }}
          type='text'
          label='Message'
          value={messageField}
          onChange={updateMessage}
        />
        <Button
          type='submit'
          disabled={loading || !messageField.length}
          loading={loading}
          style={{ margin: '0.5rem' }}
          primary
        >
          Add Entry
        </Button>
      </form>
      {!loading && !!error && (
        <Message error>
          {`Failed to add feed: ${error.message || error}`}
        </Message>
      )}
    </Container>
  )
}

export default AddFeed

Now we just need to tie it all together. After saving this file you should see the page update in your browser, and everything should be working as expected!

client/pages/index.js

import React, { useState } from 'react'
import Head from 'next/head'
import { withApollo } from '../lib'
import { Container, Header } from 'semantic-ui-react'
import IssueList from '../components/issue-list'
import IssueDetail from '../components/issue-detail'

const Home = () => {
  const [issueDetailId, setIssueDetailId] = useState(null)
  return (
    <div>
      <Head>
        <title>Issue Tracker</title>
        <link rel='icon' href='/favicon.ico' />
        <link
          rel='stylesheet'
          href='//cdn.jsdelivr.net/npm/semantic-ui@2.4.2/dist/semantic.min.css'
        />
      </Head>
      <Container fluid>
        <Header
          as='h1'
          dividing
          textAlign='center'
          style={{ padding: '1rem' }}
        >
          Issue Tracker
        </Header>
        <IssueList {...{ issueDetailId, setIssueDetailId }} />
        {!!issueDetailId && (
          <IssueDetail {...{ issueDetailId, setIssueDetailId }} />
        )}
      </Container>
    </div>
  )
}

export default withApollo(Home)

Challenge

I want you to keep on learning, so why not build something cool with that Gatsby site? Or add some more features to the issue tracker app: it would probably benefit from tags and/or a status field for each issue, and maybe issues could be marked done/archived instead of deleted. These choices are all yours to make, so build whatever scratches your itch!
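If you take on the status idea, one hypothetical starting point is a new mutation on the server. The setIssueStatus name and status field here are assumptions, not part of the schema built in this article:

```graphql
mutation SetStatus($id: ID!, $status: String!) {
  setIssueStatus(id: $id, status: $status) {
    id
    status
  }
}
```

You would then mirror this in client/lib/mutations.js and update the Apollo cache the same way the existing mutations do.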

Conclusion

There you have it: you’ve now built two cloud-native web apps, nice job! The whole point of this process is that you can now clone the Git repository to any machine running Docker and start up your application. There’s no need to worry about installing the right dependencies or configurations, and when you take it down you aren’t leaving anything behind on the base system; everything stays nice and clean.