When we started Lev two years ago, we faced a decision that companies and organizations of all stripes have to make: deployments. There is a seemingly infinite number of possible setups, but in our view every option falls short in one way or another. When putting together a wish list for our ideal automated deployment solution, here’s what we came up with:

Manage application config files in one place. (Environment-specific files like wp-config.php, production.rb, etc.)

Securely maintain both our open-source and our clients’ deployments under one roof. No need to mess with configuration files or connect wires for each new project.

Deploy to any SSH-capable Linux server, including shared hosting environments.

Parallel multi-server delivery.

Repository-specific deploy keys. Ideal for Git providers that require a unique deploy key per repository. In our case, GitHub and GitLab.

Post-deployment commands for building/compiling our applications right on the server. Useful for building Docker images, compiling assets, or restarting services.

Incredibly fast and secure file transfer over SSH alone (no reliance on FTP or FTPS).

No server dependencies needed for deployment. Not even Git.

The ability to intelligently deploy while respecting the .gitignore file, so ignored files on the server never get blown away.
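That last wish-list item boils down to a filter over the files being shipped. Here is a simplified sketch of the idea in Node.js. The function names are ours, not Stackahoy’s, and real .gitignore semantics are richer (negation with `!`, anchored patterns, `**` globs); this only handles comments, blank lines, plain names, directory names, and `*.ext` globs:

```javascript
// Parse a .gitignore file's text into a list of usable patterns,
// dropping comments and blank lines.
function loadIgnorePatterns(gitignoreText) {
  return gitignoreText
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith('#'));
}

// Simplified matcher: does any pattern apply to this file path?
function isIgnored(filePath, patterns) {
  const segments = filePath.split('/');
  return patterns.some((pattern) => {
    const p = pattern.replace(/\/$/, ''); // "node_modules/" -> "node_modules"
    if (p.startsWith('*.')) {
      // Extension glob: match the path's suffix, e.g. "*.log".
      return filePath.endsWith(p.slice(1));
    }
    // Plain name: match any path segment (file or directory).
    return segments.includes(p);
  });
}

// Keep only the files that should actually ship to the server.
function filesToDeploy(allFiles, gitignoreText) {
  const patterns = loadIgnorePatterns(gitignoreText);
  return allFiles.filter((f) => !isIgnored(f, patterns));
}
```

Because the filter runs on the deploying side, nothing that Git ignores is ever uploaded, and files that exist only on the server (uploads, logs, environment files) are left alone.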

So we took it upon ourselves to write a low-level Node.js script that handled Git post-receive payloads. After some tweaking it worked great. Success, right? Well, almost. We had achieved our goal of automated deployments, but the solution was cobbled together from too many tools and providers, and it fell short of the all-in-one-place requirement. We had to manually configure a post-receive hook in whichever repository we were deploying (on GitHub or GitLab), and for post-deployment commands we had to write one-off Bash scripts. Not ideal. We needed a user interface.

… So Stackahoy was born.