Previously, I detailed how to deploy REST-based APIs using Ruby and Sinatra. Let's now look at a similar approach using Node and Express. We'll learn how to build the API using Express 4, how to try it out locally using Docker Compose, and how to deploy your Dockerized API to your choice of cloud vendor using Cloud 66.

Building a REST API with Express 4

We'll build our REST API using Express 4, a popular Node-based framework that offers flexible routing options, allowing us to customize our API endpoints to fit our needs. There are also other Node-based frameworks, such as Hapi.js and restify, that you may wish to explore as well.

For this walkthrough, we'll build on our previous Ruby-based Products API with an Inventory API that tracks the current onhand quantities for our products. It will use MongoDB as our data store, though you can substitute the data store of your choice. We'll also use mongoose as our document mapper between Node and MongoDB.

For those more familiar with web API design, I've chosen to use the HAL hypermedia format for response payloads. To generate these easily, I'm using the halson module.
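If HAL is new to you, it's a small JSON convention that nests hyperlinks under a reserved _links property alongside the resource's state. As a rough sketch of the shape halson produces, here is the same structure built by hand in plain JavaScript (the helper name is mine, not part of any module):

```javascript
// Build a minimal HAL representation by hand: the resource's state,
// plus a _links object whose "self" relation points back at the
// resource's own URI. halson generates this same shape for us.
function toHal(state, selfHref) {
  return Object.assign({}, state, {
    _links: { self: { href: selfHref } }
  });
}

var resource = toHal(
  { product_id: '12345', quantity_onhand: 15 },
  '/product_quantities/12345'
);

console.log(JSON.stringify(resource));
```

Clients that understand HAL can then follow the self link (and any other relations we add) without hard-coding URL structures.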

For discussion purposes, I'll split the sections up. You can view the full source code for the project on my GitHub repository if you prefer to see all the code at once. Feel free to fork the repository and try it out for yourself, add new endpoints, or use it as a starting point for your own idea.

Set Up Packages and the Database

First, we need to set up the required packages for Express, along with some basic configuration and database connectivity:

// server.js
var express    = require('express'); // call express
var app        = express();          // define our app using express
var bodyParser = require('body-parser');

// HAL support
var halson = require('halson');

// configure app to use bodyParser()
// this will let us get the data from a POST
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

// set our port, defaulting if nothing is specified in the env
var port = process.env.PORT || 8080;

// load app configurations from config.js
var config = require('./config');

// configure our connection to MongoDB
var mongoose = require('mongoose');

// establish our MongoDB connection for the models
mongoose.connect(config.db[app.settings.env]);

Define the model

We then define our Mongoose model. We'll call it ProductQuantity, which will represent the onhand quantity of our product inventory:

// app/models/product_quantity.js
var mongoose = require('mongoose');
var Schema   = mongoose.Schema;

var ProductQuantitySchema = new Schema({
  product_id: String,
  quantity_onhand: Number
}, {
  timestamps: { createdAt: 'created_at', updatedAt: 'updated_at' }
});

module.exports = mongoose.model('ProductQuantity', ProductQuantitySchema);

And then we need to require our model from server.js:

// import models
var ProductQuantity = require('./app/models/product_quantity');

Now, on to the fun part: implementing our API.

Implementing the API

The first step is to obtain a handle to the Express Router, allowing us to register the endpoints:

// server.js

// get an instance of the express Router, allowing us to add
// middleware and register our API routes as needed
var router = express.Router();

The next step in implementing our API in Express is to create a PUT endpoint that allows us to create or update the quantity_onhand of a product by ID:

// server.js

// create/update a productQuantity
router.put('/product_quantities/:product_id', function(req, res) {
  if (req.body.quantity_onhand == null) {
    res.status(400);
    res.setHeader('Content-Type', 'application/vnd.error+json');
    res.json({ message: "quantity_onhand parameter is required" });
  } else {
    ProductQuantity.findOne({ product_id: req.params.product_id }, function(err, productQuantity) {
      if (err) {
        res.status(500);
        res.setHeader('Content-Type', 'application/vnd.error+json');
        return res.json({ message: "Failed to fetch productQuantity" });
      }

      var created = false; // track create vs. update
      if (productQuantity == null) {
        productQuantity = new ProductQuantity();
        productQuantity.product_id = req.params.product_id;
        created = true;
      }

      // set/update the onhand quantity and save
      productQuantity.quantity_onhand = req.body.quantity_onhand;
      productQuantity.save(function(err) {
        if (err) {
          res.status(500);
          res.setHeader('Content-Type', 'application/vnd.error+json');
          res.json({ message: "Failed to save productQuantity" });
        } else {
          // return the appropriate response code, based
          // on whether we created or updated a ProductQuantity
          if (created) {
            res.status(201);
          } else {
            res.status(200);
          }
          res.setHeader('Content-Type', 'application/hal+json');
          var resource = halson({
            product_id: productQuantity.product_id,
            quantity_onhand: productQuantity.quantity_onhand,
            created_at: productQuantity.created_at
          }).addLink('self', '/product_quantities/' + productQuantity.product_id);
          res.send(JSON.stringify(resource));
        }
      });
    });
  }
});

Next, we need to register a complementary GET endpoint to look up the current onhand quantity for the product:

router.get('/product_quantities/:product_id', function(req, res) {
  ProductQuantity.findOne({ product_id: req.params.product_id }, function(err, productQuantity) {
    if (err) {
      res.status(500);
      res.setHeader('Content-Type', 'application/vnd.error+json');
      res.json({ message: "Failed to fetch ProductQuantities" });
    } else if (productQuantity == null) {
      res.status(404);
      res.setHeader('Content-Type', 'application/vnd.error+json');
      res.json({ message: "ProductQuantity not found for product_id " + req.params.product_id });
    } else {
      res.status(200);
      res.setHeader('Content-Type', 'application/hal+json');
      var resource = halson({
        product_id: productQuantity.product_id,
        quantity_onhand: productQuantity.quantity_onhand,
        created_at: productQuantity.created_at
      }).addLink('self', '/product_quantities/' + productQuantity.product_id);
      res.send(JSON.stringify(resource));
    }
  });
});

Finally, we need to register the router with the app and bind to the port:

// Register our route
app.use('/', router);

// Start the server
app.listen(port);
console.log('Running on port ' + port);

As you probably realize, our API could do a lot more, but for this example we'll keep it as simple as possible.
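As one example of where it could go next, tightening up input validation would be a natural step. Here's a sketch of a hypothetical helper (not part of the repo) that the PUT handler could call before touching the database:

```javascript
// Hypothetical validator for the PUT body: returns null when the
// payload is acceptable, or an error message suitable for a
// vnd.error+json response when it is not. Note that form-encoded
// bodies arrive as strings, so we coerce with Number() first.
function validateQuantityBody(body) {
  if (body == null || body.quantity_onhand == null) {
    return 'quantity_onhand parameter is required';
  }
  var quantity = Number(body.quantity_onhand);
  if (isNaN(quantity) || quantity < 0 || quantity % 1 !== 0) {
    return 'quantity_onhand must be a non-negative integer';
  }
  return null;
}

console.log(validateQuantityBody({ quantity_onhand: '15' })); // null
console.log(validateQuantityBody({ quantity_onhand: -3 }));   // 'quantity_onhand must be a non-negative integer'
```

The handler would respond with a 400 and the returned message whenever the helper returns anything other than null.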

Configuring Mongoose

And finally, we need to set up a configuration file for mongoose:

// config.js
module.exports = {
  db: {
    production:  "mongodb://" + process.env.MONGODB_ADDRESS + ":27017/product_quantities",
    development: "mongodb://" + process.env.MONGODB_ADDRESS + ":27017/product_quantities"
  }
};

While this example builds the same URL for each environment from the MONGODB_ADDRESS environment variable, taking this approach allows us to support different configurations per environment in the future.
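To make the environment lookup concrete, here is a small standalone sketch of how server.js resolves the connection string from config.js (Express derives app.settings.env from NODE_ENV, defaulting to 'development'; the fallback address here is just for illustration):

```javascript
// Mirror of config.js: one connection string per environment,
// both derived from the MONGODB_ADDRESS environment variable.
function buildConfig(address) {
  return {
    db: {
      production:  'mongodb://' + address + ':27017/product_quantities',
      development: 'mongodb://' + address + ':27017/product_quantities'
    }
  };
}

// Express sets app.settings.env from NODE_ENV, defaulting to 'development'.
var env = process.env.NODE_ENV || 'development';
var config = buildConfig(process.env.MONGODB_ADDRESS || '127.0.0.1');

// This is the value mongoose.connect() receives in server.js.
console.log(config.db[env]);
```

Adding a staging or test environment later is then just a matter of adding another key under db.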

Now all we need is a package.json and we can run our API locally:

// package.json
{
  "name": "microservices-node-inventory",
  "dependencies": {
    "body-parser": "~1.0.1",
    "express": "~4.0.0",
    "halson": "~2.3.1",
    "mongoose": "~4.0.0"
  }
}

Trying our API locally

So far, we have the following files:

server.js - the Node Express API source

config.js - the configuration that informs mongoose where our MongoDB lives

app/models/product_quantity.js - the model that will represent documents in our MongoDB collection

package.json - the modules we need to run our API

Next, let's try running it locally (if you have MongoDB installed locally):

export MONGODB_ADDRESS=127.0.0.1
npm install
npm start

Create/update a product's onhand quantity, given the product identifier of 12345:

curl -X PUT http://127.0.0.1:8080/product_quantities/12345 -d "quantity_onhand=15"

You should receive a 201 Created response code with the resulting payload:

{
  "product_id": "12345",
  "quantity_onhand": 15,
  "created_at": "2016-04-11T16:43:29.000Z",
  "_links": {
    "self": {
      "href": "/product_quantities/12345"
    }
  }
}

Subsequent PUT calls will result in a 200 OK to indicate the update was successful.
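The create-versus-update distinction comes down to a single flag, mirroring the created variable tracked in the PUT handler. Sketched in isolation (the helper name is hypothetical):

```javascript
// Mirrors the `created` flag in the PUT handler: a PUT that had to
// create the document answers 201 Created, while a PUT that updated
// an existing document answers 200 OK.
function statusForUpsert(existingDoc) {
  return existingDoc == null ? 201 : 200;
}

console.log(statusForUpsert(null));                    // 201 (first PUT)
console.log(statusForUpsert({ product_id: '12345' })); // 200 (subsequent PUTs)
```

This keeps PUT idempotent: repeating the same call leaves the resource in the same state, with only the status code signaling whether anything new was created.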

To fetch the onhand quantities for the product:

curl -X GET http://127.0.0.1:8080/product_quantities/12345

This should result in a 200 OK response code along with the same details:

{
  "product_id": "12345",
  "quantity_onhand": 15,
  "created_at": "2016-04-11T16:43:29.000Z",
  "_links": {
    "self": {
      "href": "/product_quantities/12345"
    }
  }
}

That's it! Next, we'll look at how we can use Docker on our local development machine instead, to ensure we have everything working before pushing it to the cloud using Cloud 66.

Dockerize and run your API locally

For this portion of the walkthrough, you'll need to have Docker installed. Detailed instructions can be found in Andreas' previous post on 'How to deploy Rails with Docker'.

Assuming you already have Docker installed on your local development machine, let's set up a Dockerfile to tell Docker how to build a container image for our API:

FROM node:latest
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY . /usr/src/app
EXPOSE 8080
RUN npm install
CMD ["npm", "start"]

This is a simple, barebones Dockerfile based on the official Node image, which comes with the latest version of Node pre-installed. It copies all of the files for our API into the container image and then uses npm to install our modules.

Finally, let's set up a Docker Compose configuration file that will launch our API and a MongoDB instance in two separate containers:

---
inventory:
  build: .
  command: npm start
  ports:
    - "8080:8080"
  links:
    - mongodb
    - mongodb:mongodb.cloud66.local
  environment:
    - NODE_ENV=production
    - MONGODB_ADDRESS=mongodb
mongodb:
  image: mongo

We now have two additional files in our project:

Dockerfile

docker-compose.yml

To ensure that our API uses the MongoDB instance running in our container, we are setting the MONGODB_ADDRESS environment variable. Notice that the hostname we're using for our MongoDB host is the same name as the links defined in our docker-compose.yml file, above. By linking our inventory container to the mongodb container, we can use the container name in our database configuration, without knowing what internal IP address Docker assigned to the MongoDB container.

We can now use Docker Compose to build our images:

docker-compose build

Followed by running both containers:

docker-compose up

You should see a few log lines from MongoDB that I'll omit, followed by these logs from Node:

inventory_1 | npm info it worked if it ends with ok
inventory_1 | npm info using npm@3.8.3
inventory_1 | npm info using node@v5.10.1
inventory_1 | npm info lifecycle microservices-node-express-products@1.0.0~prestart: microservices-node-express-products@1.0.0
inventory_1 | npm info lifecycle microservices-node-express-products@1.0.0~start: microservices-node-express-products@1.0.0
inventory_1 |
inventory_1 | > microservices-node-express-products@1.0.0 start /usr/src/app
inventory_1 | > node server.js
inventory_1 |
inventory_1 | Running on port 8080

Just like when we run it locally, our API will be available on port 8080.

Note: If you're still running the API server in another console, it will already occupy port 8080 and will need to be shut down prior to running docker-compose up. The same applies to MongoDB, so you may also need to stop any locally running MongoDB instance on your development machine.

Try running the same cURL commands above using this new Docker Compose-based environment instead. Once you're done, you can run the following command to shut down both containers:

docker-compose stop

Preparing to deploy your Dockerized API using Cloud 66

Once you've been able to Dockerize your application on your local machine, the next step is to deploy it to your favorite cloud vendor using Cloud 66.

The only additional step is to prepare a service.yml definition that's used by Cloud 66 to build your Docker stack, including all containers. This file is used in place of docker-compose.yml when deploying to Cloud 66, as it supports additional configurations specific to a Cloud 66 deployment. More details on how it works and the options available can be found in the article titled "Docker service configuration".

For our API, we'll use the following service.yml:

---
services:
  inventory:
    git_url: git@github.com:launchany/microservices-node-inventory.git
    git_branch: master
    command: npm start
    build_root: .
    ports:
      - container: 8080
        http: 80
        https: 443
    env_vars:
      NODE_ENV: production
databases:
  - mongodb

Be sure to set the git_url to your own repository, or feel free to use the public GitHub repo shown above for this example. You can also fork the repo and customize or extend it as you wish.

This service.yml does a few things:

Sets the git branch to use (master)

Defines the command to run our service - in this case, we use npm to run our API directly

Maps the ports to make the service externally available. We are mapping container port 8080 to ports 80 and 443 (in case we decide to support TLS in the future)

Defines the environment variables to pass when the command is run. For Node, we use NODE_ENV

Defines our hosted database, in this case MongoDB. Please note that databases run on the host server rather than in a container for performance. You may also opt to use your own database instance rather than hosting it with Cloud 66. Note that the MONGODB_ADDRESS environment variable is set by Cloud 66 when using their MongoDB hosting solution

Deploy your Dockerized API using Cloud 66

With all of your files properly configured, committed and pushed to your git repository, it's time to deploy your stack to Cloud 66.

1. Log in or sign up for Cloud 66.
2. Create a New Stack, selecting the 'Docker Stack' option.
3. Give your stack a name and select an environment.
4. Switch to the advanced tab and paste the contents of the service.yml file (generated using Starter or by hand).
5. Click the button to go to the next step.
6. Select your deployment target and cloud provider as normal, and choose whether you want to deploy your databases locally, on a dedicated server, or to an external server.

For this example, I named the service 'products', used my products service GitHub repository, selected AWS, and decided to deploy the database locally on the host.

Note: If this is your first time setting up Cloud 66, you'll need to register the SSH key provided by Cloud 66 with your git repository service. This allows Cloud 66 to access your repository (e.g. GitHub) for deployments; otherwise, you will experience a deployment error.

Once completed, Cloud 66 will provision a server from your cloud vendor, build your API container image, deploy it to your server, and wire everything up. No Chef or Puppet scripting is required. You can then use the IP or hostname of your new deployment to access your API, just as you did above.

Tips for debugging your Cloud 66 Docker Stack

Along the way, I encountered a few issues. Some were the result of skipping the Cloud 66 Starter tool, as I prefer to understand the details before using automation tools. Others were the result of misconfiguration. Here are some tips to help you when deploying for the first time:

If your Docker image is created successfully but your application encounters a startup error, read the LiveLogs article to better understand how logs may be viewed across your containers from within the Cloud 66 dashboard. If no logs are visible, you may need to SSH into your server.

Unlike Heroku, Cloud 66 Docker stacks do not automatically generate database config files. The steps above for creating a config.js config file are required to properly setup mongoose.

What else can Cloud 66 do to manage my production containers?

This workflow provides a concise introduction to deploying a Node-based API to Cloud 66 using their managed container services. Here are some other key features that I've found important for managing APIs in production with Cloud 66 for Docker:

Continuous deployment support - Since Cloud 66 provides a build grid for creating new images, it can automate the full deployment process from the time you push new code to the git branch for your environment, using redeployment hooks. This saved me lots of scripting effort.

Selective container deployment - I can choose to redeploy specific services within my stack, allowing me to manually or automatically deploy new versions of my services without requiring all services to be deployed at once (and without the heavy scripting required to make this happen easily).

Parallel deployment - Since Cloud 66 manages the internal network infrastructure, I can push new deployments in parallel to an existing stack, without worrying about dropping requests. Incoming requests already in progress are completed, while new traffic is directed to the updated stack.

Multi-cloud failover - While many cloud providers can provide high availability within a region, Cloud 66 supports app failover to new regions or even completely different cloud vendors.

Internal DNS - The elastic DNS service automatically assigns internal DNS names for databases and container services and is deployment-aware. This makes it easy to integrate services without worrying about referencing the wrong version of a service during or after a new service deployment.

In an upcoming article, I'll combine this REST-based Inventory microservice with our previous Ruby-based Products microservice into the beginnings of a Docker-based microservices architecture.