Don’t you hate it when deploying your app takes ages? A container image of over a gigabyte isn’t exactly best practice. Pushing billions of bytes around every time you deploy a new version doesn’t sound right to me.

TL;DR

This article will show you a few simple steps to optimize your Docker images, making them smaller, faster, and better suited for production.

The goal is to show you the size and performance difference between using default Node.js images and their optimized counterparts. Here’s the agenda.

Why Node.js?

Using the default Node.js image

Using the Node.js Alpine image

Excluding development dependencies

Using the base Alpine image

Using the builder pattern

Let’s jump in.

Why Node.js?

Node.js is currently the most versatile and beginner-friendly environment for getting started on the back end, and it’s my primary language, so you’ll have to put up with it. Sue me, right? 😙

As an interpreted language, JavaScript doesn’t compile down to a binary the way Go does, so at first glance there’s not much you can do to strip the size of your Node.js images. Or is there?

I’m here to prove that assumption wrong. Picking the right base image for the job, installing only production dependencies in your production image, and, of course, using the builder pattern are all ways to drastically cut down the weight of your images.

In the examples below, I used a simple Node.js API I wrote a while back.

Using the default Node.js image

Starting out, of course, I used the default Node.js image, pulled straight from Docker Hub. Oh, how clueless I was.

FROM node

WORKDIR /usr/src/app

# Copy the manifests first so the dependency layer is cached between builds
COPY package.json package-lock.json ./
RUN npm install

# Copy the rest of the application source
COPY . .

EXPOSE 3000
CMD [ "node", "app.js" ]
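One easy win worth mentioning even at this first step: COPY . . copies everything in the build context into the image, including a local node_modules folder and other artifacts, unless you exclude them. A .dockerignore file next to the Dockerfile keeps those out — the entries below are a typical sketch, not taken from the original project:

```
node_modules
npm-debug.log
.git
.env
Dockerfile
```

Excluding node_modules also guarantees that the dependencies inside the image come from the RUN npm install step, not from whatever happens to be on your machine.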

Want to guess the size? My jaw dropped. 727MB for a simple API!?