Learning Rails – Deploying Rails (Part 8)

Thu Jan 5, 2017 - 1500 Words

Today we’re going to take a big step with meal planning by deploying the application. Since our application is already built on top of Docker we will be using that as our means of deployment.

Goal for this Tutorial:

  • Create production docker-compose file
  • Provision Docker host in the cloud
  • Deploy our meal planner

Our meal planning application isn’t exactly feature complete, but it’s in a good enough spot that we might want to get it out there for some people to beta test for us. For that to happen we need to put it onto a server that is publicly accessible. Heroku is a popular deployment target for Ruby on Rails applications because it is very easy to work with and requires very little knowledge from an operations standpoint, but since we’re already familiar with Docker we’re going to use that and deploy to our own server.

Note: It is possible to use Docker to deploy to Heroku also, but I will leave that as an exercise for the reader if interested.

Prepping the Application for Production

Before we can actually deploy, we need to make sure that our application can run in the production environment. Thankfully, we can run test deploys on our local Docker machine before we ever provision a server and be reasonably certain that everything will work once it's live. We'll start by creating a production-specific docker-compose configuration that we'll use only for deployment. Copy the development configuration and then make a few modifications:

$ cp docker-compose.yml docker-compose.prod.yml

We only need to change a few lines. We can't mount a local directory as a volume on a remote server, so that section has to go. Additionally, we'll want to use a different set of environment variables in production than in development, so we'll point to a different env_file. Lastly, we'll ensure that RAILS_ENV is set to production (that will happen in the env file below):

version: "2"

volumes:
  db-data:
    external: false

services:
  prod_db:
    image: postgres
    env_file: .env.prod
    volumes:
      - db-data:/var/lib/postgresql/db-data

  prod_app:
    build: .
    env_file: .env.prod
    ports:
      - "3000:3000"
    depends_on:
      - prod_db

We can create the .env.prod file by copying our original .env file and changing the username/password:

$ cp .env .env.prod

We’ll also want to generate a new password, and we can use Ruby to do that:

$ docker-compose run --rm app ruby -r securerandom -e "puts SecureRandom.base64(30)"

Copy the output of that command into the new environment file as the password, and set POSTGRES_HOST to prod_db for the time being.

.env.prod

POSTGRES_USER=meal_planner
POSTGRES_PASSWORD=exh2uRzCrbjfRNj4GpiaZuToJa6QrFHCdmDbKLNe
POSTGRES_HOST=prod_db

We’re not quite finished, though, because we still need to set a SECRET_KEY_BASE value and set RAILS_ENV to production. The secret key base is set automatically in development and test, but in any other environment it should be passed in as an environment variable. We can generate one using rake secret:

$ docker-compose run --rm app rake secret

.env.prod

POSTGRES_USER=meal_planner
POSTGRES_PASSWORD=exh2uRzCrbjfRNj4GpiaZuToJa6QrFHCdmDbKLNe
POSTGRES_HOST=prod_db
RAILS_ENV=production
SECRET_KEY_BASE=afaad8af5acc8841359153f01f970827e815def7e71d50fd5213bb8e2fcee4cc0c16d61bf30ec3a435d4849e1c4e666c653c5ddd6e407f95aa2fab938b8965b8

Testing Production Locally

Now that the application has been configured with the environment variables we need for production, we can test it locally by building the containers and running them the same way we normally would. One big difference here is that any time we change a file we will need to rebuild the prod_app image. First, we need to create our new database.

$ docker-compose -f docker-compose.prod.yml build prod_app
... After the image is built
$ docker-compose -f docker-compose.prod.yml run --rm prod_app rake db:create
$ docker-compose -f docker-compose.prod.yml run --rm prod_app rake db:migrate

That should complete without any errors, and we’ll now have a database. Next, we’ll start the application and make sure that we can visit the homepage. Before doing this, make sure your development containers are no longer running, because the production containers expose the same port (see the command below).
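
If your development stack is still running, you can stop it first. A quick aside, assuming the default docker-compose.yml in the project root is still your development configuration:

$ docker-compose stop

With the development containers out of the way, start the production stack: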

$ docker-compose -f docker-compose.prod.yml up -d

Visiting http://localhost:3000/ should show you our welcome page, but you’ll notice that it doesn’t have any of our styles. This happens because our production Rails configuration is set to not serve static assets. Serving those assets is the job of a web server like Nginx or Apache, and if we don’t have to make our Ruby process do that work, we won’t. For the time being, though, we’re going to make this configurable with an environment variable so we can toggle it on and off as needed.

Configuring Asset Handling

As of Rails 5 there is already an environment variable, RAILS_SERVE_STATIC_FILES, that determines whether the Rails application should serve files from the /public directory. We will modify config/environments/production.rb so that it also uses that variable to handle asset compilation on the fly when necessary.

config/environments/production.rb

Rails.application.configure do
  # ... additional configuration left out for brevity

  # Compress JavaScripts and CSS.
  config.assets.js_compressor = :uglifier
  config.assets.css_compressor = :sass

  # Fall back to live asset compilation when we are also serving static files.
  config.assets.compile = ENV['RAILS_SERVE_STATIC_FILES'].present?

  # ... additional configuration left out for brevity
end

When we set RAILS_SERVE_STATIC_FILES to true in our environment file, the application will both serve static files and compile assets on the fly.

.env.prod

POSTGRES_USER=meal_planner
POSTGRES_PASSWORD=exh2uRzCrbjfRNj4GpiaZuToJa6QrFHCdmDbKLNe
POSTGRES_HOST=prod_db
RAILS_ENV=production
SECRET_KEY_BASE=afaad8af5acc8841359153f01f970827e815def7e71d50fd5213bb8e2fcee4cc0c16d61bf30ec3a435d4849e1c4e666c653c5ddd6e407f95aa2fab938b8965b8
RAILS_SERVE_STATIC_FILES=true

Since we’ve changed one of the Ruby files, we need to rebuild the application image before re-running it to confirm that our changes worked.

$ docker-compose -f docker-compose.prod.yml stop prod_app
$ docker-compose -f docker-compose.prod.yml build prod_app
$ docker-compose -f docker-compose.prod.yml up -d prod_app

When you try to reload the application in the browser you might not see anything, and if you check the logs using docker logs mealplan_prod_app_1 you might see something like the following:

rm: cannot remove '/tmp/puma.pid': No such file or directory

This is caused by an issue in script/start and we can fix that pretty easily:

#!/bin/sh

# Only remove the pid file if a previous run left one behind.
if [ -f /tmp/puma.pid ]; then
  rm /tmp/puma.pid
fi

rails server -b 0.0.0.0 -P /tmp/puma.pid

Now we only delete the pid file if it already exists. If you rerun the stop, build, and up commands and reload the web page, it will take a little longer than expected but then render with the styles. That delay comes from the assets being compiled on the first request; subsequent page loads should be fast.

Creating our Docker Host in Digital Ocean

Docker provides us with a really nice tool, docker-machine, for building Docker hosts on quite a few hosting providers. For this tutorial we’re going to host with Digital Ocean and create our host using docker-machine. You’ll need to sign up with Digital Ocean before we begin because you’ll need to grab your API token. Once you have your account, generate a new API token from the API section of the Digital Ocean control panel.

I have my token stored in the environment variable DO_TOKEN, and you can set that for yourself with the export shown below. Once the token is in place we can use docker-machine to create a droplet and provision it as a Docker host for us.
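
Substitute the token you generated for the YOUR_TOKEN placeholder:

$ export DO_TOKEN="YOUR_TOKEN"

With that set, create the droplet: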

$ docker-machine create --driver=digitalocean --digitalocean-access-token=$DO_TOKEN --digitalocean-size=1gb meal-planner

Once that command has finished, it’ll make our lives a little easier if we dedicate one terminal window or tab to interacting with that Docker host. From that window, run the following command to point docker at the new server instead of your local Docker daemon:

$ eval $(docker-machine env meal-planner)
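
If you want to double-check that the switch took effect, list your machines; the meal-planner entry should be marked with an asterisk in the ACTIVE column:

$ docker-machine ls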

Deploying the Application

Our terminal window is now set to talk to the remote server, so we can go through the same steps we used on our local machine to get the application running in the cloud. The new Docker host doesn’t have any images downloaded yet, so these commands will take a little while the first time you run them.

$ docker-compose -f docker-compose.prod.yml build prod_app
... After the image is built
$ docker-compose -f docker-compose.prod.yml run --rm prod_app rake db:create db:migrate

Once our database is set up all that is left to do is run the application:

$ docker-compose -f docker-compose.prod.yml up -d
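
To confirm that both containers came up cleanly on the remote host, you can check their status with docker-compose (this talks to the droplet because of the docker-machine environment we set earlier):

$ docker-compose -f docker-compose.prod.yml ps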

Now we can find the docker host’s IP address and view our application running in the cloud. To get the IP address you can use the following command:

$ docker-machine ip meal-planner
104.236.250.214

In this case, I can go to http://104.236.250.214:3000/ to view the application.
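
If the page doesn’t load, the same log check we used locally works against the remote host as well:

$ docker-compose -f docker-compose.prod.yml logs prod_app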

Recap

We’ve taken our meal planning application from nothing more than an idea to having much of its functionality fleshed out and deployed publicly. This is only the first iteration of an application like this, and you could take it quite a bit further and improve it in any number of ways. For our purposes, though, this application is finished. Next week we’ll improve the deployment by layering Nginx into the mix as the web server, and that will conclude the introduction to Rails series.