As an aside, Habitat is written in Rust, and is incredibly snappy. Take a look at the rudimentary comparison to kubectl below. Habitat is an order of magnitude quicker.

$ time hab -h 1>/dev/null

real 0m0.011s
user 0m0.004s
sys 0m0.003s

$ time kubectl -h 1>/dev/null

real 0m0.131s
user 0m0.108s
sys 0m0.019s

Ok… so how do I package this Rails application?

hab studio enter

build

hab pkg export docker devigned/rails-todo

hab studio enter creates and opens a Habitat clean room (a Docker container) for building your application. It provides an environment with all of the tools you need to build and package your application.

build builds the Habitat package, a signed .hart file, which is really just a tar.gz.

hab pkg export docker devigned/rails-todo exports the .hart package to a Docker image.

After exiting the Habitat studio and running docker images --format "table {{ .Repository }}\t{{ .Tag }}", you should see the following.

$ docker images --format "table {{ .Repository }}\t{{ .Tag }}"

REPOSITORY TAG

devigned/rails-todo 0.1.0-20170531150240

devigned/rails-todo latest

habitat-docker-registry.bintray.io/studio 0.24.1

Wat!? Habitat just built me a Docker image. Dope!

That seems too easy… What just happened?

Remember the ./habitat directory in the root of the repo? Well, that directory holds the details of how we will package the Rails application in ./src. Here’s a listing of the ./habitat directory.

$ tree ./habitat/

./habitat/

├── config

│ ├── mongoid.yml

│ └── secrets.yml

├── default.toml

├── hooks

│ ├── init

│ └── run

└── plan.sh

The main file is ./habitat/plan.sh, which describes the packages the application depends upon (lines 8–21), the ports the application exposes (lines 22–23), and how to build and install the application (lines 40–53), as well as other pertinent metadata about the application.

./habitat/plan.sh
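The plan.sh gist isn’t reproduced here, so here is a rough sketch of what a Habitat plan for a Rails app looks like. The dependency list, version, and callback bodies are illustrative assumptions, not the post’s actual plan (the post only confirms that core/openssl is among the dependencies).

```shell
# Hypothetical sketch of a Habitat plan.sh for the app; the dependency
# list, version, and callbacks are illustrative assumptions.
pkg_name=rails-todo
pkg_origin=devigned
pkg_version=0.1.0
# runtime dependencies -- the post mentions core/openssl among them
pkg_deps=(core/ruby core/bundler core/openssl core/cacerts)
# build-only dependencies
pkg_build_deps=(core/gcc core/make)
# ports the application exposes
pkg_exposes=(port)

do_build() {
  # vendor the app's gems inside the clean room
  bundle install --path vendor/bundle
}

do_install() {
  # copy the application source into the package's install prefix
  cp -R . "$pkg_prefix/static"
}
```

Habitat sources this file inside the studio; build runs the do_* callbacks and signs the resulting .hart.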

The config directory contains Handlebars-templated configuration files, whose values are filled in from the ./habitat/default.toml file. For example, ./habitat/config/mongoid.yml will have {{cfg.mongodb_uri}} (line 4) replaced with some_connection_string from ./habitat/default.toml (line 2). Similarly, ./habitat/config/secrets.yml will have configuration values injected.

./habitat/config/mongoid.yml
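Since the mongoid.yml gist isn’t shown, the shape of the templated config is roughly the following; only the {{cfg.mongodb_uri}} placeholder comes from the post, and the surrounding keys are assumptions.

```yaml
# Hypothetical mongoid.yml template -- only the {{cfg.mongodb_uri}}
# placeholder is described in the post; the rest is illustrative.
production:
  clients:
    default:
      uri: {{cfg.mongodb_uri}}
```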

./habitat/default.toml
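The default.toml gist isn’t reproduced either; a minimal sketch follows. mongodb_uri (with its placeholder value) and port 3000 come from the post, while rails_env is an illustrative assumption.

```toml
# Hypothetical default.toml; mongodb_uri and port 3000 come from the
# post, rails_env is an illustrative assumption.
rails_env = "production"
mongodb_uri = "some_connection_string"
port = 3000
```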

The ./habitat/hooks directory contains scripts that the Habitat supervisor runs at different points in the application’s lifecycle. For this application, we’ll only be using two of the hooks, init and run. There are several other hooks (file_updated, health_check, reload, suitability, reconfigure, etc.). You can find more information about hooks in the Habitat documentation.

The init hook runs when a Habitat topology starts. As you can see from the following gist, the init hook moves our application’s files into the application’s running directory, /hab/svc/$pkg_name/static.

./habitat/hooks/init
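As a minimal sketch of that init hook (the exact copy logic is an assumption; {{pkg.path}} and {{pkg.svc_static_path}} are standard Habitat template variables):

```sh
#!/bin/sh
# Hypothetical init hook: copy the packaged app source into the
# service's static directory so the run hook can serve it from there.
cp -a {{pkg.path}}/static/. {{pkg.svc_static_path}}/
```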

The run hook runs when one of the following conditions occurs:

The main topology starts, after the init hook has been called

When a package is updated, after the init hook has been called

When the package config changes, after the init hook has been called, but before a reconfigure hook is called

As you can see from the following gist, the run hook sets up our Rails environment variables, and kicks off the Rails server bound to the IP and port defined in our Habitat configuration default source, ./habitat/default.toml .

./habitat/hooks/run
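A hedged sketch of that run hook follows; the cfg key names are assumptions drawn from the surrounding text, while {{sys.ip}} and {{cfg.*}} are standard Habitat template variables.

```sh
#!/bin/sh
# Hypothetical run hook; the cfg key names are assumptions.
cd {{pkg.svc_static_path}}

export RAILS_ENV={{cfg.rails_env}}
export MONGODB_URI={{cfg.mongodb_uri}}

# bind the Rails server to the supervisor's IP and the configured port
exec bundle exec rails server -b {{sys.ip}} -p {{cfg.port}} 2>&1
```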

Ok… we’ve deep dived enough into our Habitat configuration and packaging. Let’s deploy this application onto our Kubernetes cluster.

Deploying to Kubernetes on Azure

At this point, we have a Docker image containing our application, and we have our Azure infrastructure provisioned. We just need to tag and push our image to our private registry, and then ask Kubernetes to run our image.

To push the image to our private registry, run the following commands.

push image to private repo
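The gist isn’t reproduced here; assuming an Azure Container Registry (the registry name below is a placeholder, not the post’s actual registry), the commands look roughly like:

```sh
# Hypothetical commands; "railstodoregistry" is a placeholder for
# whatever name your Azure Container Registry was provisioned with.
az acr credential show --name railstodoregistry      # look up the admin credentials
docker login railstodoregistry.azurecr.io            # log in with those credentials
docker tag devigned/rails-todo:latest railstodoregistry.azurecr.io/rails-todo:latest
docker push railstodoregistry.azurecr.io/rails-todo:latest
```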

Once the image is pushed to the private registry, we are ready to deploy the image to our Kubernetes cluster.

deploy to Kubernetes
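A sketch of the deployment step, matching the deploy / load balance / scale out description below (2017-era kubectl syntax; the image reference and replica count are assumptions):

```sh
# Hypothetical kubectl commands; image name and replica count are assumptions.
kubectl run rails-todo --image=railstodoregistry.azurecr.io/rails-todo:latest --port=3000
kubectl expose deployment rails-todo --type=LoadBalancer --port=80 --target-port=3000
kubectl scale deployment rails-todo --replicas=3
kubectl get svc rails-todo   # watch for the load balancer's external IP
```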

The above set of commands will deploy, load balance and scale out your Chef Habitat packaged Rails todo application on Azure Container Service running Kubernetes!

Leveling up your DevOps

Each of the posts in this progression series has focused on leveling up your DevOps practices (mostly around deployment and provisioning). In this post we have described an example of a container-based immutable infrastructure deployment using Chef Habitat, Kubernetes and Docker, but we haven’t talked about the why. Why is this model a progression in DevOps maturity over previous models (handcrafted pets, infrastructure as code, etc.)?

As systems grow in complexity, it becomes increasingly difficult to reason about the state of a system. In an already complex system, additional complexity accumulates when you must account for the mutating state of each individual component to fully understand the state of the entire system. When running “pets” (long-lived machines with mutating state), we need to be constantly aware of the state of the pets, and ensure they don’t skew too far out of compliance, thus adding to the overall complexity of the system. When building infrastructure with code, one is able to define, test and rebuild infrastructure through automated processes, creating an environment where it’s easy to build and destroy infrastructure, leading to less state. When building immutable infrastructure with containers, we are able to define, test, build and persist packaged infrastructure. The key here is the ability to reason about a stateless piece of packaged infrastructure. The packaged infrastructure is versioned, signed and will not mutate state. Pair immutable packages with the added benefit of speedy container deployment and teardown, and we further reduce the complexity of the system while making it easier to replace infrastructure components.

Another interesting aspect of Chef Habitat is that the package metadata acts similarly to how we manage code dependencies, and offers many of the same benefits. We can reason about upstream changes and rebuild / redeploy packages based on those changes. For example, this application takes a dependency on core/openssl. If there were an important upstream change to that package (perhaps a vulnerability was found), then we could be alerted of such a change and redeploy with the latest patch. This isn’t limited to direct dependencies. It could include the transitive closure of all of the application’s dependencies (all the app’s dependencies, as well as all of their dependencies, recursively). This helps us better understand the state of our system in the context of an ever-changing ecosystem of dependencies.

Chef Habitat’s package metadata also provides a higher-level abstraction over packaged infrastructure, which allows us to reason about more than just dependent packages. The package metadata enables us to expose promises (intentions). For example, our application promises to expose a service on port 3000. In a more complex topology, this could allow another infrastructure component to bind to this promise. In fact, there is a whole theory behind this promise stuff.

Promise theory may be viewed as a logical and graph theoretical framework for understanding complex relationships in networks, where many constraints have to be met

With infrastructure acting like autonomous agents describing their intentions, the system can reason about topology and self-organize. It almost starts to sound like a biological system. A system that is able to reason about its own organization lowers the complexity for operators of the system by automating organization and constraint resolution.

At the end of the day, we need to grok the state of our already complex systems. Anything we can do to limit complexity and provide more clarity into the state of our systems is leveling up our operation awareness and maturity.