When using the word ‘ephemeral’, it’s hard not to think of Snapchat these days. The concept, however, also applies to the on-demand computing pattern we promote here at Iron.io with our task processing service, IronWorker. Each Docker container in which a task runs is alive only for the duration of the process itself, providing a highly effective environment for powering applications that follow the microservices architectural style.

Long Live the Container

As Docker continues to spread through the industry by promising a standardized, encapsulated runtime across any environment, an entire ecosystem has emerged around containers, from orchestration to hosting. We were early adopters with our initial use case, and we continue to leverage the technology further through multi-cloud deployments and integrations.

While deploying distributed applications within a Dockerized framework is on the fast track to becoming the model of the future, approaching it with a production-ready mindset introduces a number of concerns around security, service discovery, and failure handling. Without digging too deep into those topics, let’s look at where Docker makes sense today, and why we’ve been so successful with it as a core component of our platform.

People have been surprised by our heavy use of Docker in production, but the nature of IronWorker lends itself well to the current state of Docker without as much exposure to its drawbacks. That’s certainly not to say we haven’t had our own set of challenges, but we treat each task container as an ephemeral computing resource. Persistence, redundancy, availability: all the things we care so much about when building out our products at the service level do not necessarily apply at the individual task container level. Our concern there is essentially limited to ensuring the runtime occurs when it’s supposed to, which lets us be confident in our heavy use of Docker today.

To give a peek under the hood of IronWorker, we have a number of base Docker images stored in block storage (EBS) that provide language/library environments for running code (15 stacks and counting). Users write and package their code with only the libraries the task depends on, then upload it to our file storage (S3). The IronWorker API allows users to run any task at a set concurrency level, either on demand or on a schedule. Tasks are placed in an internal queue (IronMQ) and then pulled by one of our many task execution servers.
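The enqueue-then-pull flow above can be sketched as follows. This is a minimal illustrative simulation using Python’s in-memory queue, not the actual IronWorker or IronMQ API; all names (`enqueue_task`, `pull_task`, the stack and package labels) are hypothetical:

```python
import queue

# Stand-in for the internal queue (IronMQ in the real system).
task_queue = queue.Queue()

def enqueue_task(stack, code_package, payload):
    """Simulates the API placing a task on the internal queue.

    stack        -- which base image/language environment to use
    code_package -- the user's uploaded code bundle
    payload      -- per-task input data
    """
    task_queue.put({"stack": stack, "code": code_package, "payload": payload})

def pull_task():
    """Simulates a task execution server pulling the next task."""
    return task_queue.get()

# A user schedules a task; a runner later pulls it for execution.
enqueue_task("python-3", "image_resize.zip", {"url": "http://example.com/a.png"})
task = pull_task()
```

The queue decouples task submission from execution, which is what lets runners scale out independently of the API servers.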

These task execution servers, or “runners” as we like to call them, merge the selected base Docker image with the user’s code package in a fresh container, run the process, and then destroy the container. Rinse and repeat at massive scale. This streamlined process is clean and fast, and we are continually working to tighten it up even further by optimizing the task queue and improving container startup time.
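The merge-run-destroy lifecycle can be illustrated with a small sketch. A real runner does this with Docker containers; this sketch uses temporary directories to show the shape of the process, and the function and directory names are assumptions for illustration only:

```python
import os
import shutil
import tempfile

def run_task(base_env_dir, code_package_dir, entrypoint):
    """Illustrative runner lifecycle: merge a base environment with the
    user's code package in a fresh sandbox, run the process, then
    destroy the sandbox regardless of the outcome."""
    sandbox = tempfile.mkdtemp(prefix="task-")
    try:
        # "Merge" the base image with the user's code package.
        shutil.copytree(base_env_dir, os.path.join(sandbox, "env"))
        shutil.copytree(code_package_dir, os.path.join(sandbox, "code"))
        # "Run" the process (here: just check the entrypoint exists).
        return os.path.exists(os.path.join(sandbox, "code", entrypoint))
    finally:
        # Ephemeral: the sandbox is destroyed no matter what happened.
        shutil.rmtree(sandbox)
```

The `try/finally` is the important part: cleanup happens even when the task fails, mirroring how each container exists only for the life of its process.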

Dockerized Microservices

Wikipedia defines microservices as, “a software architecture design pattern, in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small, highly decoupled and focus on doing a small task.” This is in contrast to the monolithic approach where every component is embodied in a single and often cumbersome application framework.

While decoupling app components is not a new concept, microservices provide a more modern approach. What’s often missing from the discussion, though, is the computing environment. Where do these individual processes actually live and run? One of the key benefits of the microservices style is more streamlined orchestration at the individual service level. Scaling and orchestrating the infrastructure, however, can get expensive and complex as you separate more and more components if you’re not careful.

The ephemeral use of Docker described here fits microservices well, as the idea is to have independently developed and deployed services that each carry a single responsibility. Whether it’s sending emails and notifications, processing images, placing an order, or posting to social media, these processes should run asynchronously, outside of the immediate user response loop. This means they don’t need to be hosted in the traditional sense; they only need to be triggered by an event and run on demand.
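The pattern of keeping side effects out of the response loop might look like the following sketch. The handler returns immediately and the work runs later as on-demand tasks; the event names and helpers here are hypothetical, and the in-memory queue stands in for a real message queue:

```python
from queue import Queue

events = Queue()  # stand-in for a message queue such as IronMQ

def handle_order(order_id):
    """Respond to the user right away; side effects run out of band."""
    events.put(("send_confirmation_email", order_id))
    events.put(("post_to_social", order_id))
    return {"status": "accepted", "order": order_id}

def worker_loop_once():
    """One ephemeral task run: pull an event, process it, exit."""
    task, order_id = events.get()
    return f"{task} handled for order {order_id}"

# The user gets an instant response; workers drain the queue on demand.
resp = handle_order(42)
```

Because each event is an independent unit of work, each one can run in its own short-lived container with no long-running service to host.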

This is where IronWorker comes into play: aside from providing a workload-aware computing environment fit for any task, Iron.io handles all of the operations, provisioning, and processing of your microservices for you in a highly efficient and effective manner. This means you can keep your focus on developing code, without having to worry about how to deploy, manage, and scale it. As microservices evolve into the pattern for building modern cloud applications, having a dynamic platform like IronWorker to handle the bulk of the work will be crucial throughout the entire development lifecycle.

The Next Word

Not every service is a microservice, and there’s still the topic of handling requests, state, and inter-service communication. At the end of the day, a microservices application is meant to be a single application, and it must all come together in a unified manner. Stay tuned for the next post, where we talk about those smart pipes.

To get started with IronWorker for free, sign up for an account today. Our containers may be ephemeral, but our service and support are lasting!

Find this interesting? Discuss on Hacker News