Containers

My first decision was to use containers to encapsulate my API server and database. My main motivation here was to simplify deployment — it is far easier for me to upload a set of images to Docker Hub and run a simple git pull && docker-compose up -d than it is to write scripts to set up my environment each time. Docker with docker-compose also allows for easy management of multiple environments for development, testing, and production. It also simplifies startup and teardown locally, where I am running several separate services for the app.

So, I needed to create a Dockerfile and a docker-compose.yml:

Dockerfile:

```dockerfile
FROM mhart/alpine-node:9

RUN mkdir www/
WORKDIR www/

ADD . .

RUN npm install && npm run build

CMD npm run start
```

docker-compose.yml:

```yaml
version: "3"

services:
  api:
    build: ./api
    image: crypto-dca-api:latest
    container_name: crypto-dca-api
    env_file: config/.env
    environment:
      - NODE_ENV=production
    ports:
      - 8088:8088
  db:
    build: ./db
    image: crypto-dca-db:latest
    container_name: crypto-dca-db
    env_file: config/.env
    volumes:
      - crypto-dca-db:/var/lib/postgresql/data
    ports:
      - 5432:5432

volumes:
  crypto-dca-db:
    driver: local
```

With this, I am able to run docker-compose up -d from the root of my project and spin up a server and database using a persistent volume. I also ended up creating separate containers for development, and a different environment for testing. I'll leave those as an exercise for the reader; this is just a basic example of what the production setup looks like.

To learn more about Docker and docker-compose, check out the official tutorial.

The server

You need a server to connect to your database and respond to GraphQL requests from the client.

I chose Node as my server-side language because I come from a front-end background and it made sense to leverage my domain expertise when building a server. Node is a good choice because it has extremely robust community support around GraphQL and is highly portable and easy to run in a container.

With async/await support as of version 8, asynchronous control flow is much easier to manage, which is a huge boon when building a highly asynchronous API.
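To illustrate the point, here is a small sketch of what that flow looks like — the `lookup` function is a stand-in for any promise-returning call (a database query, an exchange API request), not part of the actual project:

```javascript
// Stand-in for any promise-returning call (DB query, exchange API, etc.).
const lookup = id => Promise.resolve({ id, name: `wallet-${id}` });

// With async/await, a set of asynchronous lookups reads top-to-bottom,
// almost like synchronous code.
async function getNames(ids) {
  const wallets = await Promise.all(ids.map(lookup)); // run lookups concurrently
  return wallets.map(w => w.name);
}

getNames([1, 2]).then(names => console.log(names)); // → [ 'wallet-1', 'wallet-2' ]
```

The same logic written with nested callbacks would need manual bookkeeping to collect results and propagate errors.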

The database

I went with postgres as my database for a couple of reasons: It has extensive community support with ORMs, it is open-source, and it is lightweight. MySQL would work just as well, though PostgreSQL has a reputation for scaling better. Ultimately, the DB is well abstracted behind the ORM so it is relatively simple to swap later if necessary.

Here is the (dead simple) Dockerfile for my database:

```dockerfile
FROM postgres:10

COPY ./setup-db.sh /docker-entrypoint-initdb.d/setup-db.sh
```

And my init file:

```bash
#!/bin/bash
set -e

POSTGRES="psql --username ${POSTGRES_USER}"

DATABASES=($POSTGRES_DEV_DB $POSTGRES_TEST_DB $POSTGRES_PROD_DB)

for i in ${DATABASES[@]}; do
  echo "Creating database: ${i}"
  psql -U postgres -tc "SELECT 1 FROM pg_database WHERE datname = '${i}'" | grep -q 1 || \
    psql -U postgres -c "CREATE DATABASE \"${i}\""
done
```

On container startup, the init file simply creates any of the databases that do not already exist.

The request server

With the basics out of the way, we can start putting the pieces together to actually serve requests to our API. I chose express as a basic webserver framework because of its ubiquity in the node ecosystem, plugin support, and simple API.

Express allows us to listen on a port and respond to HTTP requests, but we need another layer to allow us to digest and respond to GraphQL requests. For this, I use apollo-server-express. It has an extremely simple API and does some mapping to allow us to define our schema in the node GraphQL schema language. Here’s what it looks like in action:

```javascript
const bodyParser = require('body-parser');
const { graphqlExpress, graphiqlExpress } = require('apollo-server-express');
const logger = require('../helpers/logger');

const { NODE_ENV } = process.env;

module.exports = function (app) {
  const schema = require('../schema');

  app.use('/graphql', bodyParser.json(), (req, res, next) =>
    graphqlExpress({
      schema,
      context: { user: req.user }
    })(req, res, next)
  );

  if (NODE_ENV === 'development') {
    app.get('/graphiql', graphiqlExpress({
      endpointURL: '/graphql'
    }));
  }

  logger.info(`Running a GraphQL API server at /graphql`);
}
```

All we're doing here is setting up our root endpoints; we still need to define the mapping between the GraphQL query language and our database in our schema.

The ORM

In order to map between the query language of your database (SQL) and the native language of your server (JavaScript), you typically use an ORM. There are a few popular ORMs in JavaScript, but I decided to go with Sequelize because it is the most heavily maintained, comes with a CLI tool, and has lots of active community support.

To connect Sequelize to your database, you need to do a few things. Unfortunately, there is a gap between the current version of sequelize-cli and the latest version of Sequelize (v4). You can still use sequelize-cli to scaffold a Sequelize app, but you may need to make some modifications, especially to CLI-generated models.

To get started, you can install sequelize-cli and run sequelize init from your project directory. By default, this will create a new directory structure with config, models, migrations, and seeders, as well as an index.js file in the models directory that creates a new instance of the ORM with the given configuration and associates all models with that instance.

I ended up splitting this into two files for easier testing:

build-db.js:

```javascript
var env = process.env.NODE_ENV || 'development';
var config = require(__dirname + '/../config/config.js')[env];
var Sequelize = require('sequelize');

module.exports = function () {
  return config.use_env_variable ?
    new Sequelize(process.env[config.use_env_variable]) :
    new Sequelize(config.database, config.username, config.password, config);
}
```

decorate-db.js:

```javascript
const fs = require('fs');
const path = require('path');
const Sequelize = require('sequelize');

const modelPath = path.join(__dirname, '../models');

module.exports = function (sequelize) {
  const db = {};

  fs
    .readdirSync(modelPath)
    .filter(file => file.indexOf('.') === -1)
    .forEach(folder => {
      const model = sequelize['import'](
        path.join(modelPath, folder, 'index.js')
      );
      db[model.name] = model;
    });

  Object.keys(db).forEach(modelName => {
    if (db[modelName].associate) {
      db[modelName].associate(db);
    }
  });

  db.sequelize = sequelize;
  db.Sequelize = Sequelize;

  return db;
}
```

From there, you can define your models by hand or use the CLI tool to help build them.

models/Wallet/index.js:

```javascript
const { v4 } = require('uuid');

module.exports = (sequelize, DataTypes) => {
  const Wallet = sequelize.define('Wallet', {
    id: {
      primaryKey: true,
      type: DataTypes.STRING,
      defaultValue: () => v4()
    },
    name: {
      type: DataTypes.STRING,
      allowNull: false
    },
    address: {
      type: DataTypes.STRING,
      allowNull: false
    },
    local: {
      type: DataTypes.BOOLEAN,
      allowNull: false
    }
  });

  Wallet.associate = function ({ User, Wallet }) {
    Wallet.belongsTo(User);
  }

  return Wallet;
};
```

Once you are done, you should have a JavaScript object that you can import into other files, giving you full query access to your database.

One trick I found for easily generating both migrations and my initial database shape was to actually dump the state of my DB out to a file and use that as a raw SQL import. This saved me a lot of time writing migration syntax (which is not quite the same as model definition syntax), as well as writing seeders — I just have a few SQL files that I can load up as test states or initial seed state for development.

migrations/1-initial-state.js:

```javascript
const { readFile } = require('fs');

module.exports = {
  up(migration) {
    return new Promise((res, rej) => {
      readFile(
        'migrations/initial-tables.sql',
        'utf8',
        (err, sql) => {
          if (err) return rej(err);
          migration.sequelize.query(sql, { raw: true })
            .then(res)
            .catch(rej);
        });
    });
  },
  down: (migration) => {
    return migration.dropAllTables();
  }
}
```

The ORM to GraphQL adapter

One critical package that we need to add is the code that allows us to easily map Sequelize models to GraphQL types, queries, and mutations. The aptly-named graphql-sequelize package does this quite well, providing two excellent abstractions that I will discuss below — a resolver for mapping GraphQL queries to Sequelize operations, and an attributeFields mapping allowing us to re-use our model definitions as GraphQL type field lists.

The GraphQL schema

Whew! All that work and we haven't even written anything that GraphQL can understand yet. Don't worry; we're getting there. Now that we have a JavaScript representation of our database, we need to map that to a GraphQL schema.

There are two pieces of a GraphQL schema that we need to create. The first is a set of types that allows us to properly specify the form of our data. The second is the list of queries and mutations that we can use to search and manipulate our data.

Types

There are two main ways to create types. The first is a more manual process, where you specify the exact shape of the type for each model. That looks something like this:

models/Wallet/type.js:

```javascript
const {
  GraphQLObjectType,
  GraphQLNonNull,
  GraphQLBoolean,
  GraphQLString
} = require('graphql');

module.exports = new GraphQLObjectType({
  name: 'Wallet',
  description: 'A wallet address',
  fields: () => ({
    id: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'The id of the wallet'
    },
    name: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'The name of the wallet'
    },
    address: {
      type: new GraphQLNonNull(GraphQLString),
      description: 'The address of the wallet'
    },
    local: {
      type: new GraphQLNonNull(GraphQLBoolean),
      description: 'Whether the wallet is local or on an exchange'
    }
  })
});
```

This approach gives us fine-grained control over all of the fields in our type, the ability to add metadata, and the ability to create additional computed fields on a type that might not exist on the model.
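As an example of a computed field, here is a hypothetical `displayName` field (not part of my actual schema) that derives its value from model attributes at resolve time; in the real `fields` object its `type` would be `new GraphQLNonNull(GraphQLString)`:

```javascript
// Hypothetical computed field for the Wallet type: it has no column in
// the database, and its value is derived in the resolve function.
const displayName = {
  description: 'Wallet name annotated with where it lives',
  resolve: wallet => `${wallet.name} (${wallet.local ? 'local' : 'on exchange'})`
};

console.log(displayName.resolve({ name: 'BTC savings', local: true }));
// → BTC savings (local)
```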

The disadvantage is definitely the verbosity — it's a lot of work to basically re-define all of your models as GraphQL types. Luckily, the graphql-sequelize package gives us a shortcut through attributeFields:

```javascript
const { GraphQLObjectType } = require('graphql');
const { attributeFields } = require('graphql-sequelize');
const { Wallet } = require('../');

module.exports = new GraphQLObjectType({
  name: 'Wallet',
  description: 'A wallet address',
  fields: attributeFields(Wallet)
});
```

This saves us a good deal of typing but removes some of the expressiveness and discoverability that GraphQL enables us to create. I opted to do all of my types long-hand, but at the end of the day it’s up to you.

Queries and Mutations

Types represent pieces of data in our schema, while queries and mutations represent ways of interacting with those pieces of data. I decided to create a few basic queries for each of my models — some enabling lookup, and some enabling modification. The resolver provided by graphql-sequelize makes creating these an absolute breeze, and begins to show some of the power behind coupling GraphQL with a good ORM.

models/Wallet/queries.js:

```javascript
const {
  GraphQLNonNull,
  GraphQLString,
  GraphQLList
} = require('graphql');
const { Op: { iLike } } = require('sequelize');
const { resolver } = require('graphql-sequelize');
const walletType = require('./type');
const sort = require('../../helpers/sort');

module.exports = Wallet => ({
  wallet: {
    type: walletType,
    args: {
      id: {
        description: 'ID of wallet',
        type: new GraphQLNonNull(GraphQLString)
      }
    },
    resolve: resolver(Wallet, {
      after: result => result.length ? result[0] : result
    })
  },
  wallets: {
    type: new GraphQLList(walletType),
    resolve: resolver(Wallet)
  },
  walletSearch: {
    type: new GraphQLList(walletType),
    args: {
      query: {
        description: 'Fuzzy-matched name of wallet',
        type: new GraphQLNonNull(GraphQLString)
      }
    },
    resolve: resolver(Wallet, {
      dataLoader: false,
      before: (findOptions, args) => ({
        where: {
          name: { [iLike]: `%${args.query}%` }
        },
        order: [['name', 'ASC']],
        ...findOptions
      }),
      after: sort
    })
  }
})
```

You can see that for the most part, you just define the model and response type and graphql-sequelize handles the gruntwork of doing the lookup for you.

Mutations are quite similar, though you need to do the legwork of updating the model yourself:

model/User/mutations.js:

```javascript
const {
  GraphQLNonNull,
  GraphQLString
} = require('graphql');
const userType = require('./type');
const { resolver } = require('graphql-sequelize');

module.exports = User => ({
  createUser: {
    type: userType,
    args: {
      name: {
        description: 'Unique username',
        type: new GraphQLNonNull(GraphQLString)
      },
      password: {
        description: 'Password',
        type: new GraphQLNonNull(GraphQLString)
      }
    },
    resolve: async function (root, { name, password }, context, info) {
      const user = await User.create({
        name,
        password
      });
      return resolver(User)(root, { id: user.id }, context, info);
    }
  }
});
```

With our types, queries, and mutations created, we just need to stitch everything together into a single schema and plug it into apollo-server-express:

schema.js:

```javascript
const {
  GraphQLObjectType,
  GraphQLSchema
} = require('graphql');
const { queries, mutations } = require('./models/fields');

module.exports = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'RootQuery',
    fields: () => queries
  }),
  mutation: new GraphQLObjectType({
    name: 'RootMutation',
    fields: () => mutations
  })
});
```

Et voilà! We can now start hitting our server on /graphql and /graphiql and interacting with the schema on top of our database.

We do need a couple more pieces in order to have a robust API solution, however. Just being able to play with our API doesn’t mean that it’s tested, maintainable, or secured. I’ll talk briefly about how to check those pieces off as well.

Logging

Logging is a vital part of any project. It allows us to easily identify exactly what is happening with our app and track down bugs as they happen. After playing around with hand-rolled logs, I decided to outsource to a well-known package called winston. It allows me to set global log levels and to log to stdout, stderr, a file, or a remote API if I want.

helpers/logger.js:

```javascript
const { NODE_ENV } = process.env;
const winston = require('winston');

let level, transports;

switch (NODE_ENV) {
  case 'development':
    level = 'verbose';
    transports = [new winston.transports.Console()];
    break;
  case 'production':
    level = 'verbose';
    transports = [
      new winston.transports.File({
        filename: 'error.log',
        level: 'error'
      }),
      new winston.transports.File({
        filename: 'combined.log',
        level: 'verbose'
      })
    ];
    break;
}

module.exports = winston.createLogger({
  level,
  transports
});
```

This allows me fine-grained control over exactly what gets logged where. In code I can specify the level of the message like so: logger.verbose(message);

Authentication and authorization

Any API, especially one that allows modification or retrieval of sensitive data, will need authentication and authorization. This is the best article I found on the subject, and it led me to implement authentication separately from my GraphQL API.

To piece it together, I used the stack of passport, express-session, and connect-session-sequelize. This allows me to use a passport provider to authenticate a user, then save the authentication token in a database session and store data in a cookie. On request, I can parse the cookie and use it to identify the user making the request. Here’s what it looks like:

routes/auth.js:

```javascript
const bodyParser = require('body-parser');
const passport = require('passport');
const expressSession = require('express-session');
const Store = require('connect-session-sequelize')(expressSession.Store);
const flash = require('express-flash');
const LocalStrategy = require('passport-local').Strategy;
const logger = require('../helpers/logger');

const { SESSION_KEY } = process.env;

module.exports = function (app) {
  const db = require('../helpers/db').up();

  passport.use('local', new LocalStrategy(
    async (username, password, done) => {
      const { validLogin, user } = await db.User.checkPassword(username, password);
      return validLogin ?
        done(null, user) :
        done(null, false, {
          message: 'Invalid username or password'
        });
    }
  ));

  passport.serializeUser(function (user, done) {
    done(null, user.id);
  });

  passport.deserializeUser(async function (id, done) {
    const user = await db.User.findById(id);
    done(null, user);
  });

  app.use(expressSession({
    secret: SESSION_KEY,
    store: new Store({
      db: db.sequelize
    }),
    resave: false,
    saveUninitialized: false
  }));

  app.use(passport.initialize());
  app.use(passport.session());
  app.use(flash());

  app.post(
    '/auth/local',
    bodyParser.urlencoded({ extended: true }),
    passport.authenticate('local'),
    (req, res) => res.send(req.user.id)
  );

  app.post(
    '/logout',
    async (req, res) => {
      req.logout();
      req.session.destroy(function (err) {
        err && logger.error(err);
        res.clearCookie('connect.sid');
        res.sendStatus(200);
      });
    }
  );
}
```

This allows us to do authorization because it places the user on every request object. So, if we look back at our GraphQL route, we see:

```javascript
app.use('/graphql', bodyParser.json(), (req, res, next) =>
  graphqlExpress({
    schema,
    context: { user: req.user }
  })(req, res, next)
);
```

This allows us to access the current user as context in any of our queries. If we want to do access control, we can check in the resolve method of any protected query or mutation whether the current user is allowed to perform that particular action.
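One lightweight way to express that check is a small wrapper around a resolve function. This `authenticated` helper is a sketch of the idea, not a utility from graphql-sequelize or the project itself:

```javascript
// Hypothetical guard: wraps any resolve function and rejects the call
// when no user is present on the GraphQL context.
function authenticated(resolve) {
  return (root, args, context, info) => {
    if (!context || !context.user) {
      throw new Error('Not authenticated');
    }
    return resolve(root, args, context, info);
  };
}

// Usage sketch inside a query definition:
//   resolve: authenticated(resolver(Wallet))
```

Finer-grained checks (ownership, roles) would follow the same pattern, inspecting `context.user` against the arguments or the fetched record.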

Tests

Ah, tests. The one thing we love to either obsess over or forget about entirely. Finding a good way to test this API has been more than a little challenging — Sequelize in particular seems to struggle with resetting to a good state and closing connections during testing. You'll notice a lot of calls to helpers/db throughout the code — this allows us to lazily instantiate the DB when required, rather than assuming that the connection will be created at the application level.
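A minimal sketch of that lazy pattern (the `makeLazyDb` factory and its `createDb` argument are illustrative stand-ins, not the project's actual helper):

```javascript
// Lazily-instantiated DB helper: the connection is only built on the
// first call to up(), and the same instance is returned thereafter.
function makeLazyDb(createDb) {
  let db = null;
  return {
    up() {
      if (!db) db = createDb(); // connect only when first needed
      return db;
    }
  };
}

// Every caller that does require('../helpers/db').up() shares one instance.
const helper = makeLazyDb(() => ({ models: {} }));
console.log(helper.up() === helper.up()); // → true
```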

Some ground rules for testing this app:

- Most tests should be integration tests. docker-compose makes it easy to spin up a sandboxed environment for tests and respond on a given port; let's take advantage of that and write our tests from the perspective of a client interacting with our API rather than that of the developer of the API.
- We have migrations, seeds, and the ability to start and stop the database. We should leverage that to test from as clean a slate as possible for each test. Let's not carry over state between tests.
- We should be able to watch our tests as we develop, to aid in writing tests alongside code.

So, with this in mind, here is how I created my test framework. I started by creating a new docker-compose service for my tests:

```yaml
api-test:
  build:
    context: ./api
    dockerfile: Dockerfile-dev
  image: crypto-dca-api:latest
  container_name: crypto-dca-api-test
  env_file: config/.env
  environment:
    - NODE_ENV=test
  entrypoint: npm run watch-tests
  volumes:
    - ./api:/www
```

This allows me to set NODE_ENV and run a custom command for watching tests. That watch-tests command is defined in my package.json:

```json
"watch-tests": "NODE_ENV=test mocha --exit --watch ./test/{unit,integration}/index.js"
```

This watches both my unit and integration test entrypoints. Those entrypoints allow me to do test-group startup and cleanup operations.

Here is what my integration runner looks like:

test/integration/index.js:

```javascript
const { describe, before, after } = require('mocha');
const { up } = require('../../helpers/db');
const { start, stop } = require('../../helpers/server');
const testDir = require('../helpers/test-dir');
const runMigration = require('../helpers/migration');

let db, migrate;

describe('integration tests', () => {
  before(async () => {
    db = up();
    migrate = runMigration(db);
    await migrate.down();
    await migrate.up();
    await start({ db });
  });

  [
    'db',
    'auth',
    'graphql',
    'rpc'
  ].forEach(dir =>
    testDir(`integration/${dir}`)
  );

  after(async () => {
    await migrate.down();
    await stop();
  });
});

module.exports = () => db;
```

This ensures we are starting from a clean DB and server state, and that we clean up after ourselves.

Here’s a sample integration test:

```javascript
const { expect } = require('chai');
const { describe, it } = require('mocha');
const fetch = require('node-fetch');
const { name } = require('../../helpers/sort');

describe('wallet query', () => {
  it('should be able to query all wallets', async () => {
    const query = encodeURIComponent(`
      {
        wallets {
          name,
          address,
          local
        }
      }
    `);

    const resp = await fetch(`http://localhost:8088/graphql?query=${query}`);
    const { data: { wallets } } = await resp.json();

    expect(
      wallets.sort(name)
    ).to.deep.equal([
      {
        name: "local BTC",
        address: "abacadsf",
        local: true
      },
      {
        name: "remote BTC",
        address: "asdfdcvzdsfasd",
        local: false
      },
      {
        name: "remote USDT",
        address: "vczvsadf",
        local: false
      }
    ]);
  });
});
```