Let's move on. In this part of the series, we want to deploy our first application. If you come from NodeJS, the deployment life cycle looks like this:

Possible breaks during a deployment and in production

With NodeJS, you can push any code to a production server. You need good tests, ESLint, and other tools to catch undefined and type errors.

In an ideal world, we have a development cycle which looks like this:

An ideal world scenario where the code fails before it is getting deployed

So we want to break things as early, and as close to the code (your local machine), as possible. Once we have figured out a working code base, we would like to bring exactly this working solution onto a server. Because of Rust's type system and strong compiler, we would be able to package a working binary and move it to production. Tests would cover the rest of the errors.

Rust moves possible errors closer to the coding environment

a) The Rust compiler will catch almost all problems.

b) You can catch the rest with good tests (in our case: error handling when receiving the wrong parameters).

c) Once your Rust code compiles, you have a binary which can be shipped in many different ways.
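Point b) can look like this in practice. The following is a sketch, not code from our app: parse_id is a hypothetical parameter-validation helper, and the test module shows how Rust's built-in test harness catches wrong parameters before anything is deployed.

```rust
// Hypothetical helper: validate a numeric id parameter and return a
// descriptive error instead of panicking on unexpected input.
fn parse_id(raw: &str) -> Result<u32, String> {
    raw.trim()
        .parse::<u32>()
        .map_err(|_| format!("invalid id parameter: {:?}", raw))
}

#[cfg(test)]
mod tests {
    use super::*;

    // `cargo test` runs these long before any deployment step
    #[test]
    fn accepts_valid_ids() {
        assert_eq!(parse_id("42"), Ok(42));
    }

    #[test]
    fn rejects_garbage() {
        assert!(parse_id("forty-two").is_err());
    }
}

fn main() {
    // demonstrate both the success and the error path
    assert_eq!(parse_id("42"), Ok(42));
    assert!(parse_id("forty-two").is_err());
    println!("{:?}", parse_id("forty-two"));
}
```

A failing assertion in cargo test stops the pipeline before any binary is shipped, which is exactly the "fail early" behavior we want.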

Difference between local and production-ready code

When we talk about deploying, we have to make sure that our code is able to:

let the hosting environment assign the PORT it listens on

handle errors gracefully

respond to unexpected input with proper return codes and messages

fail early in the deployment pipeline with a proper test setup

log events so errors can be traced

In this article we will cover the first must-have (letting the environment assign the PORT). The following articles in the series will cover the rest of the requirements.

Four different deployment options

We generally have different deployment and hosting options. Some are more suited for large-scale applications and some are better for private projects and for getting a project off the ground without too much complexity. Our options are:

Managed Deployments / Hosting (Heroku)

Self managed via Docker and a Docker registry

Self managed via Docker and a Git registry

Managed Serverless Lambda functions (AWS Lambda, ZEIT now)

We will cover each of these options in this article and look at their advantages, their disadvantages, and how to prepare your Rust code so it can be deployed in the best possible way.

Building the first version of your app

As we said in the beginning, we need an idea of what we want to build. Even if we map out the bigger picture of the application in the next article (03/x), we can get started and choose a framework to build it with:

rocket

actix

gotham

tide (work in progress)

As seen in the first article, you can also go lower level if you want.

We will pick one framework for the written version of this article. I will pick tide, since I am planning to contribute to it more in the future. I will map out solutions for rocket and actix in the GitHub repository for this series.

Set up our app

We want to make sure to use asynchronous code, which is not in stable Rust yet. Therefore we need to install and activate the nightly version of Rust:

$ rustup install nightly-2019-02-25

$ rustup default nightly-2019-02-25

Now we can create our application. Open a terminal window and enter:

$ cargo new my-cool-web-app

$ cd my-cool-web-app

This will generate our first folder structure. The bare bones of a running web app with tide look like this:

Cargo.toml

[package]

name = "my-cool-web-app"

version = "0.1.0"

authors = ["YOUR NAME + EMAIL"]

edition = "2018"

[dependencies]

tide = "0.0.5"

main.rs

#![feature(async_await)]

fn main() {
    let mut app = tide::App::new(());
    app.at("/").get(async || "Hello, world!");
    app.serve();
}

As we said earlier, we need to give the hosting environment the chance to assign a PORT to our application.

Our main.rs has to accompany these requirements:

#![feature(async_await)]

// to be able to read environment variables
use std::env;
// to be able to pass a different base configuration to our app
use tide::configuration::Configuration;

// we need to read the PORT from the env variable (Heroku sets it)
fn get_server_port() -> u16 {
    env::var("PORT")
        .ok()
        .and_then(|p| p.parse().ok())
        .unwrap_or(8181)
}

fn main() {
    let mut app = tide::App::new(());
    let app_config = Configuration::build()
        .address(String::from("0.0.0.0"))
        .port(get_server_port())
        .finalize();
    app.config(app_config);
    app.at("/").get(async || "Hello, World!");
    app.serve();
}
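The PORT fallback in get_server_port can be checked in isolation. In this sketch, parse_port is a helper name of our own (not part of the main.rs above), extracted so the logic is testable without mutating the real environment:

```rust
use std::env;

// Our own extraction of the fallback logic, not in the article's main.rs.
// A pure function, so it can be tested without touching env variables.
fn parse_port(raw: Option<String>) -> u16 {
    raw.and_then(|p| p.parse().ok()).unwrap_or(8181)
}

// Same behavior as get_server_port from main.rs above.
fn get_server_port() -> u16 {
    parse_port(env::var("PORT").ok())
}

fn main() {
    // no PORT variable: fall back to the default
    assert_eq!(parse_port(None), 8181);
    // a valid PORT wins
    assert_eq!(parse_port(Some("3000".to_string())), 3000);
    // garbage falls back to the default instead of crashing
    assert_eq!(parse_port(Some("not-a-port".to_string())), 8181);
    println!("port in this environment: {}", get_server_port());
}
```

This is exactly the behavior Heroku relies on: it sets PORT, and our app either picks it up or falls back to 8181 locally.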

With this setup ready, we can go over each deployment option.

Managed Deployments via Heroku

Managed environments are for the most part just an abstraction. Internally they do the same as you would with your own pipeline: you push code to a git repository, a "hook" watches this repository and, on changes, compiles the latest version and runs it. For you, however, it's just a git push heroku master.

High level overview of deployments via Heroku

To get started, you need a Heroku account (free). Log in with your new account and create a new app:

“Create new app” interface in Heroku

After clicking “Create app”, Heroku explains under the “Deploy” tab how to push your code to their servers:

Heroku explains what you need to do to push and deploy code

Prepare your code

First, we need to be able to push our code base to the remote location (Heroku). Therefore, please install the Heroku toolchain. Afterwards we can add the remote location to our Git repository:

$ cd my-cool-web-app

$ heroku login

$ heroku git:remote -a my-cool-web-app

Next, we need to tell Heroku how to run our application after it is built. Heroku expects a file with the name Procfile, which has the start command in it:

$ touch Procfile

And put the following line in it:

web: ./target/release/my-cool-web-app

We also have to tell Heroku which version of Rust we are using. Since we want to use nightly, we create a file called RustConfig in the root directory:

$ touch RustConfig

with the following line:

VERSION=nightly

Caveat

Rust is so new that Heroku doesn't support it out of the box. We need to install and activate a "buildpack" for Rust. So inside the root directory of your application, execute the following commands:

$ heroku create --buildpack emk/rust

$ heroku buildpacks:set emk/rust

This will activate the language support for Rust.

Now we can commit and push:

$ git add .

$ git commit -m "Init"

$ git push heroku master

When this succeeds, we go back to the Heroku dashboard in the browser and click on the generated domain (under "Settings"). A browser window should open and display "Hello, World!".

Summary

Heroku makes it easy to deploy your application

In less than 5 minutes you have a running version of your app live

You can assign your own domain and activate HTTPS (if you pay for it)

Heroku is the best option when it comes to this tutorial and starting side projects: cheap, easy to use, and it removes the overhead of deployments, especially in the beginning

Docker

Using Docker has the huge advantage of freedom in choosing your pipelines and environments. You can either build the image locally and push it as-is to a Docker registry, from where a server can download and execute (docker run) it. Or you create a blueprint (a Dockerfile) which other services can use to build the image on their servers.

If you are using Docker for your deployments, you have two options. The first one is to push your code (with a Dockerfile) to a Git registry (like GitHub or Bitbucket) and then have a configured deployment server which listens for changes, pulls the code from the Git registry, builds it, and deploys and runs it.

Using Docker and a Git registry to publish your code

Your second option is to use a Docker registry. There you have the advantage of pre-building your container and shipping it as-is. This can make deployments faster, and you have to ship less code (especially in the case of Rust).

Using Docker and a Docker registry to ship and publish your container

We can use Rust's ability to compile to a binary. We can even go one step further and compile a static Rust binary with no external dependencies. What we need for this is:

Build a Rust binary

Statically link the needed C libraries to it so it can run on its own

The result would be a binary which doesn't even need Rust installed to run. Thanks to the open-source community and Eric Kidd, there is already a solution out there which helps us with that.

The result is a super small Docker image with no external dependencies. Meet rust-musl-builder: a Docker image which helps you build static Rust binaries. Docker will download the whole builder image on first execution.

Everything we type and create happens from the root directory of our application.

$ cd my-cool-web-app

Before we create our Dockerfile, let's see what we are actually trying to do. We are using rust-musl-builder to statically link the musl libc library into our binary.

$ docker run --rm -it -v "$(pwd)":/home/rust/src ekidd/rust-musl-builder cargo build --release

This will create our super small binary. You can inspect it like that:

$ ls -lh target/x86_64-unknown-linux-musl/release/my-cool-web-app

It is just a few MB in size (in my example: 4.4 MB). To be able to recreate this procedure over and over again, not just on our local machine but also in a deployment pipeline on different servers, we create a multi-stage Dockerfile.

FROM ekidd/rust-musl-builder:nightly AS build
COPY . ./
RUN sudo chown -R rust:rust .
RUN cargo build --release

FROM scratch
COPY --from=build /home/rust/src/target/x86_64-unknown-linux-musl/release/my-cool-web-app /
ENV PORT 8181
EXPOSE ${PORT}
CMD ["/my-cool-web-app"]

You can build the image now via

$ docker build -t my-cool-web-app:latest .

And run it with

$ docker run -d --rm -P --name my-cool-web-app my-cool-web-app:latest

Now you can open your browser via:

$ open http://$(docker container port my-cool-web-app 8181)

We just created a super minimal Docker image which contains our binary with no external dependencies. You can inspect your just created image via

$ docker image ls my-cool-web-app

The size of our Docker image is super small

Summary

Docker is a beast, but when used wisely can be quite helpful

Especially with Rust: You can create statically linked binaries which are super small and don’t even need a Rust environment to run in

You also have many more options to host and run your application when choosing Docker

However, managed hosting environments like Heroku don’t allow pushing Docker images to their environment

Serverless runtimes — ZEIT/now

Serverless is a different mindset than the first two options. Serverless also means stateless, so you are not building web applications but functions. Instead of having API endpoints built into your app, you basically just have those API endpoints (in serverless terms: handlers). Web frameworks like rocket and actix might be overkill here. Right now, ZEIT does not support Rust nightly builds in their new serverless environment.

So instead of creating a binary (with cargo new web-app), we create a library:

$ cargo new now-service --lib

$ cd now-service

Here we have to create a file called now.json.

And our src/lib.rs example looks like this:

use http::{header, Request, Response, StatusCode};

fn handler(request: Request<()>) -> http::Result<Response<String>> {
    let response = Response::builder()
        .status(StatusCode::OK)
        .header(header::CONTENT_TYPE, "text/html")
        .body("<!doctype html><html><head><title>A simple deployment with Now!</title></head><body><h1>Welcome to Rust on Now</h1></body></html>".to_string())
        .expect("failed to render response");
    Ok(response)
}
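To see the shape of such a handler without pulling in the http crate, here is a std-only sketch. SimpleResponse and its fields are stand-ins of our own invention, not the real http::Response API:

```rust
// Stand-in for http::Response<String>, for illustration only.
struct SimpleResponse {
    status: u16,
    content_type: String,
    body: String,
}

// Mirrors the handler above: a stateless function that turns a request
// into a complete response, which is all a serverless handler is.
fn handler(_path: &str) -> SimpleResponse {
    SimpleResponse {
        status: 200,
        content_type: "text/html".to_string(),
        body: "<h1>Welcome to Rust on Now</h1>".to_string(),
    }
}

fn main() {
    let res = handler("/");
    assert_eq!(res.status, 200);
    assert_eq!(res.content_type, "text/html");
    assert!(res.body.contains("Rust on Now"));
    println!("{} {}", res.status, res.content_type);
}
```

Because the handler holds no state between calls, the platform can spin instances up and tear them down freely; that is the core trade-off of the serverless model.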

As with Heroku, you need to install the ZEIT toolchain, which is called “now”. There are several options. If you are on macOS, you can do it via:

$ brew cask install now

This installs the Now application. Find it in your /Applications folder and open it. You can finish the installation by typing in your email address. This will also install the command-line toolchain.

That’s basically it. You can type

$ now

and hit Enter. This will start the upload of your application. Log in to your ZEIT dashboard and click on the provided link.

All deployments are getting listed with a link to the deployed application

Summary