Docker for an Existing Rails Application

A few weeks ago I decided to take an existing Ruby on Rails application and configure it to work with Docker. I wanted a container-based development environment that fed into a continuous integration to continuous deployment pipeline. The hope is that this style of development would eliminate the differences in environment you typically find between work laptops running OS X and staging, testing, and production servers running Linux. I also wanted to simplify server configuration and maintenance, plus make it super easy for another person to jump on the project. Docker delivers on all these hopes, but with so many different ways to use Docker it’s hard to craft a good setup. I wanted an optimal setup that allows me to work with Rails the way that I’m used to as well as deliver on the promise of containerization. This article will discuss how to make that happen, and is the first of three I plan to write on spinning up Docker-based development for a CI to CD pipeline.

Note to the reader

For your convenience, all of the code in this article is available online at GitHub. The article assumes you’re comfortable with Rails, Docker, Docker Compose, and the command line. It will not dive into Docker basics. There are tons of articles written on that subject. If you’ve never worked with Docker then I suggest you begin by following the excellent getting started guide and browsing the docs. Once you’ve gained an understanding of Docker terminology and tools then come back here to learn how to make your existing Rails application multi-container and production deployable. Finally, the application used in this article is named “docker_example”. You should change that to your application’s real name throughout the examples.

The stack and architecture

I run my Rails apps with a pretty standard stack: Nginx at the front for serving static assets and simple load balancing, Unicorn in the middle for application processing, and Postgres in the back for data storage. Docker, more specifically Docker Compose, is used to tie these three services together by orchestrating their communication and creating a multi-container, deployable application. By multi-container I mean that each service runs in its own container and communicates with other services via TCP/IP. This is more complex than if I created one container that runs all three services. A multi-container architecture, however, provides a better separation of concerns and adheres to the UNIX philosophy of “every tool (container) should do one thing, and do it well.” Ultimately, a multi-container architecture makes it easy to replace one service with another simply by swapping containers.

Step 1: dockerize your Rails app

First we need to tell Docker how to build the image that will run our Rails app. To do that, create a "Dockerfile" at the root of your Rails application with the following contents:

Dockerfile

```dockerfile
# Base our image on an official, minimal image of our preferred Ruby
FROM ruby:2.2.3-slim

# Install essential Linux packages
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev postgresql-client

# Define where our application will live inside the image
ENV RAILS_ROOT /var/www/docker_example

# Create application home. App server will need the pids dir so just create everything in one shot
RUN mkdir -p $RAILS_ROOT/tmp/pids

# Set our working directory inside the image
WORKDIR $RAILS_ROOT

# Use the Gemfiles as Docker cache markers. Always bundle before copying app src.
# (the src likely changed and we don't want to invalidate Docker's cache too early)
# http://ilikestuffblog.com/2014/01/06/how-to-skip-bundle-install-when-deploying-a-rails-app-to-docker/
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock

# Prevent bundler warnings; ensure that the bundler version executed is >= that which created Gemfile.lock
RUN gem install bundler

# Finish establishing our Ruby environment
RUN bundle install

# Copy the Rails application into place
COPY . .

# Define the script we want run once the container boots
# Use the "exec" form of CMD so our script shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD ["config/containers/app_cmd.sh"]
```

There are certain files and directories we don’t want copied over with the COPY . . command. We can ignore them with a .dockerignore at the root of the Rails app. Add to it the following:

.dockerignore

```
.git
.env
.dockerignore
```

Step 2: configure the application server

The last line of our Dockerfile references a script that does not yet exist. We need to create it, so go ahead and create a directory under config/ named “containers” and paste the following into a script named “app_cmd.sh”:

app_cmd.sh

```bash
#!/usr/bin/env bash

# Prefix `bundle` with `exec` so unicorn shuts down gracefully on SIGTERM (i.e. `docker stop`)
exec bundle exec unicorn -c config/containers/unicorn.rb -E $RAILS_ENV
```

Make sure it is executable, so run chmod 775 on it for good measure. Also make sure you’ve included the unicorn gem in your Gemfile.

The script contains the command we want Docker to run when it initializes a container from our image. It will start the Unicorn server that will process our source code. We put the command in a script because we want the $RAILS_ENV environment variable honored at runtime (i.e. when the container starts). This will not work if we put the command directly in our Dockerfile. That’s because Docker strips out environment variables from the build host in order to keep builds consistent across all the different platforms it supports.
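A quick sketch of why the script approach works, runnable outside Docker (the /tmp path is just for the demo): the variable is read when the script runs, not when it is written.

```shell
# Write a script that references $RAILS_ENV. The quoted heredoc
# delimiter ('EOF') prevents expansion at write time.
cat > /tmp/app_cmd_demo.sh <<'EOF'
#!/usr/bin/env bash
# $RAILS_ENV is resolved here, at runtime -- just like at container boot
echo "booting with RAILS_ENV=$RAILS_ENV"
EOF
chmod +x /tmp/app_cmd_demo.sh

# The same script honors whatever environment it is started with
RAILS_ENV=production /tmp/app_cmd_demo.sh   # prints: booting with RAILS_ENV=production
RAILS_ENV=development /tmp/app_cmd_demo.sh  # prints: booting with RAILS_ENV=development
```

Baking the command directly into a `CMD` instruction would instead freeze whatever value (likely blank) existed at build time.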

Our Unicorn server is configured with the -c option, which references a file we have to create. Open a file named config/containers/unicorn.rb and add the following:

unicorn.rb

```ruby
# Where our application lives. $RAILS_ROOT is defined in our Dockerfile.
app_path = ENV['RAILS_ROOT']

# Set the server's working directory
working_directory app_path

# Define where Unicorn should write its PID file
pid "#{app_path}/tmp/pids/unicorn.pid"

# Bind Unicorn to the container's default route, at port 3000
listen "0.0.0.0:3000"

# Define where Unicorn should write its log files
stderr_path "#{app_path}/log/unicorn.stderr.log"
stdout_path "#{app_path}/log/unicorn.stdout.log"

# Define the number of workers Unicorn should spin up.
# A new Rails app just needs one. You would scale this
# higher in the future once your app starts getting traffic.
# See https://unicorn.bogomips.org/TUNING.html
worker_processes 1

# Make sure we use the correct Gemfile on restarts
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{app_path}/Gemfile"
end

# Speeds up your workers.
# See https://unicorn.bogomips.org/TUNING.html
preload_app true

#
# Below we define how our workers should be spun up.
# See https://unicorn.bogomips.org/Unicorn/Configurator.html
#
before_fork do |server, worker|
  # the following is highly recommended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0 downtime deploys.
  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
```

At this point you should be able to run docker build -t dockerexample_app . and successfully build the Rails application Docker image. It won't run yet because we still have to set up the database, but you should see the image listed when you run docker images.

Step 3: introduce Docker Compose

Since our application will be running across multiple containers it would be nice to control them all as one. That is what Docker Compose does for us. To get our app started with Docker Compose create a file docker-compose.yml in the root of your Rails app with the following contents:

docker-compose.yml

```yaml
# service configuration for our dockerized Rails app
app:
  # use the Dockerfile next to this file
  build: .
  # sources environment variable configuration for our app
  env_file: .env
  # rely on the RAILS_ENV value of the host machine
  environment:
    RAILS_ENV: $RAILS_ENV
  # makes the app container aware of the DB container
  links:
    - db
  # expose the port we configured Unicorn to bind to
  ports:
    - "3000:3000"
```

In order to build from this config you’ll need to define RAILS_ENV on whatever host you plan to build on. Without it Compose will default the value to a blank string and give you a warning. You’ll also need to create a .env file at the root of your Rails app. In .env you can define environment variables and use them to configure your application. For example:

.env

```
SECRET_KEY_BASE=06c08ac1dca74d9b20eb7bf46ba2646a9ed058f607b32d0a6df3a7c5fa9048f0318e521a3a7cfd90a2872fbfdaf9502ad1217805b608a3ec9bebddad0d56a4ab
MONEY_API_KEY=ed3daa1f192f656d37c504675bbd7b20
MONEY_API_PASSWORD=topsecret
```

It is strongly recommended that you .gitignore your .env file so that it doesn’t end up in your source control. If you don’t plan to use .env then you can get rid of the env_file: line.
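One idempotent way to do that, assuming you're at the root of the repository:

```shell
# Append .env to .gitignore unless it's already listed
# (-F matches the literal string, -x the whole line)
grep -qxF '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore
```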

Step 4: containerize your database

Under the links: section of our docker-compose.yml we reference a container named “db”. This will be the container we use to run our Postgres database. Creating that container is dead simple. Just add this to your docker-compose.yml:

docker-compose.yml

```yaml
# service configuration for our database
db:
  # use the preferred version of the official Postgres image
  # see https://hub.docker.com/_/postgres/
  image: postgres:9.4.5
  # persist the database between containers by storing it in a volume
  volumes:
    - docker-example-postgres:/var/lib/postgresql/data
```

Next, you'll need to update your database.yml to be similar to this:

database.yml

```yaml
default: &default
  adapter: postgresql
  encoding: unicode
  host: db
  port: 5432
  username: postgres
  password: <%= ENV['POSTGRES_PASSWORD'] %>

development:
  <<: *default
  database: your_dev_db_name # CHANGE ME

test:
  <<: *default
  database: your_test_db_name # CHANGE ME

production:
  <<: *default
  database: your_production_db_name # CHANGE ME
```

ENV['POSTGRES_PASSWORD'] is blank by default and will work out of the box. As mentioned in the Postgres image docs, you can assign a value to POSTGRES_PASSWORD and the Postgres container will honor it as the password of the postgres user. The environment: and env_file: Docker Compose directives are useful for setting this value from the host.
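For example, you could set the password once in .env (the value below is a placeholder — generate your own) and point the db service at the same file, so Rails and the Postgres container agree:

```yaml
# .env -- shared by app (via env_file) and db
# POSTGRES_PASSWORD=changeme

# docker-compose.yml -- add env_file to the db service so the
# Postgres container picks up the same value database.yml reads
db:
  image: postgres:9.4.5
  env_file: .env
```
This is a sketch; adapt it to however you already manage the .env file described above.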

Now run docker-compose build from the root of your app to create your application and database images. Once built, you can initialize your development DB with docker-compose run app rake db:create and then populate it however you see fit, e.g. docker-compose run app rake db:schema:load db:seed or docker-compose run app rake db:migrate db:seed. (You could even import a Postgres dump, but the details of that are beyond the scope of this article.) Now we can finally run the application with docker-compose up -d. To verify the containers are up, execute docker ps. You should see output similar to this:

docker ps

```
> docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
bd0c625513dc        dockerexample_app   "config/containers/ap"   38 minutes ago      Up 3 seconds        0.0.0.0:3000->3000/tcp   dockerexample_app_1
77c2c3743864        postgres:9.4.5      "/docker-entrypoint.s"   38 minutes ago      Up 3 seconds        5432/tcp                 dockerexample_db_1
```

To test your app in the browser navigate to $DOCKER_HOST:3000. DOCKER_HOST should be the IP your Docker daemon is running on. I installed Docker with the OS X version of Docker Toolbox via Homebrew, and out of the box my Docker daemon is bound to IP 192.168.99.100.

Once you see your app I suggest you quit the containers with docker-compose stop before continuing.

Step 5: proxy your web requests

Remember, we want our development environment to be the same as the one we deploy to production. That means we need a reverse proxy, in our case the Nginx web server, to proxy requests to Unicorn. This is required in production because Unicorn is designed to serve fast clients, like a UNIX socket or local port, not slow clients like web browsers. Also, since Unicorn is an application server, we want it doing what it does best: crunching and serving our application code. We don't want it serving static assets (i.e. .js, .css, .png, etc. files that never change). Nginx is great at serving static assets, so we want it to do that job.

In order to get Nginx into the mix we need another Docker container. To get started, add this to your docker-compose.yml:

docker-compose.yml

```yaml
# service configuration for our web server
web:
  # set the build context to the root of the Rails app
  build: .
  # build with a different Dockerfile
  dockerfile: config/containers/Dockerfile-nginx
  # makes the web container aware of the app container
  links:
    - app
  # expose the port we configured Nginx to bind to
  ports:
    - "80:80"
```

We’ll also want to tweak the app configuration. Change this line:

docker-compose.yml

```yaml
ports:
  - "3000:3000"
```

to this:

docker-compose.yml

```yaml
expose:
  - "3000"
```

That makes it so our Unicorn port is no longer open on the host machine. Instead it will only be available to other running Docker containers (e.g. the web container we are creating). This is more secure because it reduces the number of ports your application needs open in production. By the time we are finished the only port our application will expose to the outside world is port 80 for web requests.

Notice that our Compose web configuration references a build file that does not yet exist (config/containers/Dockerfile-nginx). Go ahead and create that file now with these contents:

Dockerfile-nginx

```dockerfile
# build from the official Nginx image
FROM nginx

# install essential Linux packages
RUN apt-get update -qq && apt-get -y install apache2-utils

# establish where Nginx should look for files
ENV RAILS_ROOT /var/www/docker_example

# Set our working directory inside the image
WORKDIR $RAILS_ROOT

# create log directory
RUN mkdir log

# copy over static assets
COPY public public/

# copy our Nginx config template
COPY config/containers/nginx.conf /tmp/docker_example.nginx

# substitute variable references in the Nginx config template for real values from the environment
# put the final config in its place
RUN envsubst '$RAILS_ROOT' < /tmp/docker_example.nginx > /etc/nginx/conf.d/default.conf

# Use the "exec" form of CMD so Nginx shuts down gracefully on SIGTERM (i.e. `docker stop`)
CMD ["nginx", "-g", "daemon off;"]
```

Our build file, and Nginx itself, relies on a configuration file that we need to create. Open config/containers/nginx.conf in your text editor and add the following:

nginx.conf

```nginx
# This is a template. Referenced variables (e.g. $RAILS_ROOT) need
# to be rewritten with real values in order for this file to work.
# To learn about all the directives used here, and more, see
# http://nginx.org/en/docs/dirindex.html

# define our application server
upstream unicorn {
  server app:3000;
}

server {
  # define our domain; CHANGE ME
  server_name yourproductiondomain.com;

  # define the public application root
  root $RAILS_ROOT/public;
  index index.html;

  # define where Nginx should write its logs
  access_log $RAILS_ROOT/log/nginx.access.log;
  error_log $RAILS_ROOT/log/nginx.error.log;

  # deny requests for files that should never be accessed
  location ~ /\. {
    deny all;
  }

  location ~* ^.+\.(rb|log)$ {
    deny all;
  }

  # serve static (compiled) assets directly if they exist (for rails production)
  location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ {
    try_files $uri @rails;

    access_log off;
    gzip_static on; # to serve pre-gzipped version
    expires max;
    add_header Cache-Control public;

    # Some browsers still send conditional-GET requests if there's a
    # Last-Modified header or an ETag header even if they haven't
    # reached the expiry date sent in the Expires header.
    add_header Last-Modified "";
    add_header ETag "";
    break;
  }

  # send non-static file requests to the app server
  location / {
    try_files $uri @rails;
  }

  location @rails {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://unicorn;
  }
}
```

At this point you should be able to build all containers with docker-compose build, and then run everything with docker-compose up -d. To verify that all three containers are up and running, execute docker ps. You should see output like this:

docker ps

```
> docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                         NAMES
d867759237a6        dockerexample_web   "nginx -g 'daemon off"   9 minutes ago       Up 2 seconds        0.0.0.0:80->80/tcp, 443/tcp   dockerexample_web_1
333df34dc609        dockerexample_app   "config/containers/ap"   9 minutes ago       Up 2 seconds        3000/tcp                      dockerexample_app_1
345516bac081        postgres:9.4.5      "/docker-entrypoint.s"   11 minutes ago      Up 2 seconds        5432/tcp                      dockerexample_db_1
```

Final test: browse to $DOCKER_HOST and you should see your app.

Working with Docker containers in development

If you made it this far I assume you’re running your Rails application in a multi-container Docker environment that is fit for development, test, and production. Congratulations! If you were to develop with this setup, however, I think you’d quickly find that it’s a pain in the ass. That’s because every change you make to your code would require you to rebuild your image and restart your containers to see the change. That’s not how we develop with Rails. We’re used to making a change and refreshing the page to see it in action. Fortunately we can achieve the same thing with Docker by adding a new file, docker-compose.override.yml, to the root of our application with the following:

docker-compose.override.yml

```yaml
app:
  # map our application source code, in full, to the application root of our container
  volumes:
    - .:/var/www/docker_example

web:
  # use whatever volumes are configured for the app container
  volumes_from:
    - app
```

Docker Compose automatically looks for this file and applies it on top of our docker-compose.yml configuration. That is, configuration in docker-compose.override.yml will supplement or override configuration in docker-compose.yml. This makes it very convenient for making environment-specific modifications to our Docker builds and resulting containers. To try it out run docker-compose stop && docker-compose up -d to restart your containers, make a visual change to your code as usual, and refresh the containerized app in your browser to see the change. Lastly, I like to .gitignore docker-compose.override.yml to make sure that it isn’t deployed and used elsewhere. This helps ensure only fully containerized apps run outside my dev machine.

Running tests, rake tasks, and consoles

I find it easiest to always have a shell open inside my container to work with Rails like I'm used to. To get that going, run docker exec -it dockerexample_app_1 /bin/bash. That gives you a minimal shell to work in. From there you can rspec spec, rake some:task, rails c, rails db, etc. Since it's a minimal shell you won't have all the command line goodness you're probably used to. You can tweak the shell, however, by building into the application image anything you need. In particular you can apt-get new packages and/or COPY over shell configurations in your Dockerfile. Just be aware that doing so will increase the size of your built image, and since these containers are meant to run in production you won't want a full-blown dev playground.

Conclusion

With the right Docker setup a software development team can get new members up and running faster than ever before. They can also ensure a consistent environment no matter where the application is run. This goes a long way in reducing time spent on bugs that result from variations between development, test, and production environments. Lastly, Rails plays nicely with Docker. With the right Docker (Compose) configuration you can easily work with the same Ruby/Rails tools you always have, and hardly change the way you already write code.

Stay tuned to this blog, or follow me on Twitter, to be aware of follow-up articles in which I’ll discuss using the Docker setup outlined here in both continuous integration and continuous deployment environments.

Got questions or feedback? I want it. Drop your thoughts in the comments below or hit me @ccstump.

Thanks for reading!

Addendum

5/9/16 Be secure! Now that your Rails app is running in a Docker container lock it down with HTTPS for free using Let’s Encrypt.

3/17/16 I’ve published the last article of the series. Read it to learn how to use your dockerized Rails application with continuous deploy.

3/3/16 Ready for more? See the second article in this series to learn how to run your freshly dockerized Rails app in continuous integration using CircleCI.

3/2/16 My use of the .env file in this article has raised questions and some confusion. Thankfully Ryan Nickel wrote a post explaining all the ways to use environment variables with Docker, and why you should consider their use for your app. Definitely worth a read.