After exploring how to create and deploy Docker containers the old-school way, the time has come to leverage the power of docker-compose.

Docker Compose offers a whole bunch of advantages when it comes to orchestrating multiple containers. It automatically creates a private network, spins your containers up in the correct order when there are dependencies between them, and deploying to Amazon's ECS is a breeze using ecs-cli.

In this post, I’ll run you through best practices for creating a Dockerfile and a docker-compose.yml file, and then connecting your web service to an Ethereum node. We’ll cover the following steps:

1. Avoid unnecessary rebuilds of your node_modules folder, and make sure node_modules is built inside your Docker container.

2. Implement mounted volumes and nodemon for live-reloading of your NodeJS application.

3. Connect your NodeJS application to an Ethereum geth image.

Let’s start with the Dockerfile that describes your NodeJS container. We’ll be using the official node:7.8.0 image, and set npm’s logging level to warnings, as things can get quite verbose.

(TLDR — Source files here)

How to prevent re-builds of your node_modules folder

The trick to prevent Docker from re-building your node_modules folder inside your container on every build is to copy only the package.json file into your container and then run npm install. Afterwards you can copy any other files that might have changed into your container.
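In Dockerfile terms, the caching trick looks roughly like this (a sketch — the /app path is an assumption here):

```Dockerfile
# Copy only package.json first: this layer (and the npm install
# below) is cached and only re-runs when dependencies change
COPY package.json /app
RUN npm install

# Source files are copied afterwards, so editing them
# does not invalidate the npm install layer above
COPY . /app
```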

It’s important to make sure you run npm install inside your container, and not just copy/paste your node_modules folder from your host machine. If you are developing on a Windows machine, conflicts will arise for specific npm modules that build differently on different operating systems. In our case, the Ethereum web3 package is one of those packages.

In general, not copying your node_modules is a best practice — so create the following .dockerignore file:

node_modules

How to live-reload your NodeJS application

To achieve this, we’re going to need 2 things:

The nodemon package that triggers a restart when a file changes

A mounted volume that mirrors your host disk in the container

Installing nodemon is easy enough, but since it needs to be globally installed inside the container we’ll add it to our Dockerfile.

For nodemon to work inside a docker container, it needs to run in legacy mode, which can be achieved by using the -L flag.

$ nodemon -L index.js

We’ll save this command in our package.json under the name dev:

"scripts": {
  "start": "node index.js",
  "dev": "nodemon -L index.js"
}

Next up we need to make sure the changes to files on your system are reflected inside the container. This can be achieved by mounting your folder. We’ll configure this inside our docker-compose.yml file later on like this:

volumes:
  - .:/app

We’re mounting our root directory into a directory called /app, and in our Dockerfile we’ll assign this /app folder as our working directory. While our Docker container is running, any changes in our host directory will be reflected in the container directory, which will trigger nodemon.

Next up, we’ll copy the rest of our files to the container using COPY . /app

Here’s what your Dockerfile should look like:
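Bringing the pieces above together, a plausible sketch of the Dockerfile looks like this (the node:7.8.0 base, log level, global nodemon, and /app working directory follow the post; the exposed port and start command are assumptions):

```Dockerfile
FROM node:7.8.0

# Keep npm output down to warnings
ENV NPM_CONFIG_LOGLEVEL warn

# nodemon needs to be installed globally inside the container
RUN npm install -g nodemon

WORKDIR /app

# Copy package.json first so the npm install layer is cached
COPY package.json /app
RUN npm install

# Copy the rest of the source into the working directory
COPY . /app

EXPOSE 3001
CMD ["npm", "run", "dev"]
```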

How to connect NodeJS to our Ethereum Network

The true power of Docker Compose shows itself when we are running multiple containers at the same time. Our second container is an Ethereum node.

The container will be an Ubuntu image running a geth node. The image we’re using is called makevoid/ethereum-geth-dev. It will start up a personal Ethereum node running a fresh blockchain, and one of the advantages of this particular image is that it only mines when new transactions present themselves.

We just add the following lines to our docker-compose.yml file:

geth:
  image: "makevoid/ethereum-geth-dev"
  ports:
    - 30303:30303
    - 8545:8545

The Ethereum service is called geth and we’ll call our server web. We only want to start our webserver once geth is up, which we can define using the depends_on configuration. Bringing all of the previous pieces together, and adding the right ports for our services, our YAML file should look like this:
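A sketch of the complete docker-compose.yml (compose file format 2 is assumed, since depends_on requires it; the web service’s build context and port mapping are assumptions):

```yaml
version: '2'
services:
  web:
    build: .
    volumes:
      # Mount the host directory so nodemon sees file changes
      - .:/app
    ports:
      - 3001:3001
    depends_on:
      # Wait for the geth container before starting the webserver
      - geth
  geth:
    image: "makevoid/ethereum-geth-dev"
    ports:
      - 30303:30303
      - 8545:8545
```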

Docker compose will automatically run our containers in a private network, where the hostnames are identical to the name of the services.

Let’s spin up a very basic express server and show the coinbase (the address where the mining rewards go). Since geth will be running inside a container called geth, the hostname will reflect this, so we want the web3 library to connect to http://geth:8545
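A minimal index.js sketch, assuming the pre-1.0 web3 API used at the time (where web3.eth.coinbase is a synchronous property) and port 3001 for the webserver:

```javascript
const express = require('express')
const Web3 = require('web3')

// "geth" resolves to the geth container on the compose network
const web3 = new Web3(new Web3.providers.HttpProvider('http://geth:8545'))

const app = express()

app.get('/', (req, res) => {
  // In web3 0.x, eth.coinbase is a synchronous getter
  res.send('Coinbase: ' + web3.eth.coinbase)
})

app.listen(3001, () => console.log('Listening on port 3001'))
```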

Now everything is set up — try running the following commands:

$ docker-compose build

$ docker-compose up

(If you want to kill your running containers, use $ docker-compose kill)

Navigate to http://localhost:3001 and you should see your coinbase address.

Changing your index.js file should trigger nodemon and restart the application. For more information on configuring nodemon, check out the docs.

Congratulations! You now have a pretty solid base to start developing an application that interacts with your own private blockchain, batteries and webserver included.

In my next post I’ll run through interacting with the blockchain, creating transactions / running smart contracts and deploying all of it to AWS.

If you’d like to see this post in action, feel free to download the source files!

or take a look at the live application on icoshift.com