It used to be that the big beat the small—today the fast beat the slow. Fast teams keep their talent engaged, ship faster, and beat the competition to market. Microservices let you increase your engineering speed and agility.

Using microservices allowed SoundCloud to reduce a standard release cycle from 65 days all the way down to 16. The two diagrams below show before and after deploy cycle timelines.

Length of deploy cycle before microservices:

Length of deploy cycle after microservices:

How did they accomplish this? A microservices architecture allowed them to decouple blocking portions of the development workflow, clarify and isolate concerns, and focus on component-level changes.

With the rise of AI/machine learning, microservices are more important than ever. As teams adopt microservice-oriented architectures, often serving powerful ML models, they build better products faster, outpacing their competition.

What is a scalable microservice?

Services are small and serve a single function

Services are built on a scalable, serverless infrastructure: the service author does not need to worry about DevOps

Services have continuous deployment and automated testing (ideally)

Services are language agnostic: a data scientist can write their model in R, and it can be integrated into an app written in Scala with no issues

What is the advantage of Serverless Microservices?

Focus on code, not servers

In a monolithic application, the developer has to keep the server in view at all times. Contained microservices, by contrast, allow separate functions to run in parallel without affecting one another's performance. As long as each microservice is written to be computationally efficient, the developer doesn't need to focus on servers.

Write the code once, use it anywhere

Imagine working at a large news organization with the ongoing problem of image resizing across dozens of web properties. A resizing microservice would let any engineer in the org add resizing to the code they're working on with a simple API call.
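A minimal sketch of what that "simple API call" could look like. The endpoint URL and query parameters here are hypothetical, not an actual service:

```python
# Sketch: calling a shared image-resizing microservice.
# The endpoint and parameter names below are invented for illustration.
from urllib.parse import urlencode

RESIZE_ENDPOINT = "https://services.example.com/v1/resize"

def build_resize_request(image_url, width, height):
    """Build the GET request URL for the hypothetical resizing service."""
    query = urlencode({"src": image_url, "w": width, "h": height})
    return f"{RESIZE_ENDPOINT}?{query}"

# Any engineer, in any codebase, reuses the same one-line call:
request_url = build_resize_request("https://cdn.example.com/photo.jpg", 640, 480)
# e.g. urllib.request.urlopen(request_url) would then fetch the resized image
```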

Language agnostic

Data scientists often work in Python or R, while the platform team may be in Scala or Java. In a monolith, every piece must be integrated in the same language. Because microservices communicate via APIs, any language can interoperate with any other. Various open-source and cloud-based systems are emerging to automate the DevOps side of this, often called Function as a Service (FaaS).

Composable

Microservices can be easily used together, like ingredients in a recipe, to get the result you’re looking for. For example, you could have one microservice that grabs all of the photos on your site, runs each through a facial recognition microservice and a license plate recognition microservice, sends the photos that contain faces through a service that blurs out each face, then sends that photo back to your database to replace the uncensored image. Over time, your organization will build up a portfolio of favorite microservices that can easily be discovered and composed.
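The photo pipeline described above can be sketched as plain function composition over service calls. The service names and payload shapes are invented, and the HTTP calls are stubbed out so the control flow is visible:

```python
# Sketch of composing microservices into a pipeline (endpoints hypothetical).

def call_service(name, payload):
    """Stand-in for an HTTP POST to the named microservice."""
    if name == "detect_faces":
        # Toy detector: pretend photos named "portrait..." contain faces.
        return {"has_face": "portrait" in payload["photo"]}
    if name == "blur_faces":
        return {"photo": payload["photo"] + "+blurred"}
    raise ValueError(f"unknown service: {name}")

def censor_photos(photos):
    """Run each photo through face detection; blur the ones with faces."""
    out = []
    for photo in photos:
        if call_service("detect_faces", {"photo": photo})["has_face"]:
            photo = call_service("blur_faces", {"photo": photo})["photo"]
        out.append(photo)
    return out
```

Calling `censor_photos(["portrait1.jpg", "landscape.jpg"])` blurs the first photo and leaves the second untouched; swapping the stub for real HTTP calls doesn't change the composition logic.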

Cloud agnostic

Microservices can give you the freedom to use different mixes of cloud services. You may want to run your AI/ML models on one provider’s high-performance GPUs, while using a different provider for cost-efficient database hosting.

Launch incremental features

Developers and teams can work independently to develop, deploy, and scale their microservices—they can also push fixes for bugs without affecting the rest of the infrastructure. This greatly simplifies the orchestration of timelines by product and dev managers.

Complements other cloud services

Modern apps take advantage of the wide array of cloud infrastructure that’s available today. Since these services are generally integrated via APIs, microservices fit right in.

The economics work well

Serverless microservices are spun up only when needed, and all of the major hosting services charge only for what you use (often by the second or in 100 ms increments). The savings can be massive, especially for AI/ML models whose high compute needs come in bursts.
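A back-of-the-envelope comparison makes the point. The prices below are illustrative assumptions, not any provider's actual rates:

```python
# Toy comparison of always-on vs. per-invocation billing.
# Both prices are made-up numbers for illustration only.

ALWAYS_ON_PER_HOUR = 0.10   # assumed hourly rate for a dedicated instance
PER_100MS = 0.0000002       # assumed serverless price per 100 ms slice

def monthly_always_on():
    """Cost of an instance running 24/7 for a 30-day month."""
    return ALWAYS_ON_PER_HOUR * 24 * 30

def monthly_serverless(invocations, ms_per_invocation):
    """Cost when you pay only for 100 ms slices actually used."""
    slices = invocations * (ms_per_invocation / 100)
    return slices * PER_100MS

# A bursty ML model: 1M invocations/month at 300 ms each
always_on = monthly_always_on()                    # 72.0 under these assumptions
serverless = monthly_serverless(1_000_000, 300)    # about 0.60 under these assumptions
```

Under these (assumed) rates, the bursty workload costs a tiny fraction of an always-on instance; the gap closes as utilization approaches 100%.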

How does a microservice work?

There are many options that you’ll have to customize to best suit your needs, but every microservice approach shares some commonalities:

Your code

Your code is stateless: it receives inputs via an API gateway, runs in the language of your choice, and returns its results via the API.
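A minimal sketch of that stateless contract, assuming a Lambda-style JSON event shape (the handler name, payload fields, and toy sentiment logic are all invented):

```python
# Sketch of a stateless microservice handler: JSON in, JSON out,
# no state read or written between invocations.
import json

def handler(event):
    """Receive a JSON payload from the API gateway, return a JSON result."""
    payload = json.loads(event["body"])
    # Toy stand-in for real model logic:
    sentiment = "positive" if "great" in payload["text"] else "neutral"
    return {"statusCode": 200, "body": json.dumps({"sentiment": sentiment})}

# The gateway would invoke it like this:
response = handler({"body": json.dumps({"text": "This product is great"})})
```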

Your infrastructure

Your infrastructure really matters when it comes to microservices. Serverless hosting options (see below) handle security, permissioning, GPU/CPU management, containerization, language support, and more. There's a lot to think about, and if you don't want to do heavy DevOps yourself, we strongly recommend using a managed platform.

Your API endpoints

This is how your microservice application will receive data in and send data out.

Your discoverability/versioning

You're going to need infrastructure that allows your team to version APIs, keep track of who's responsible for what, and index the various microservices.
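As a toy illustration of what such a registry might track, here is an in-memory sketch. The service names, versions, and owners are invented; a real system would persist this and expose it over an API:

```python
# Toy service registry: versioned endpoints plus an owner per service,
# so teams can discover services and track responsibility.

registry = {}

def register(name, version, owner, url):
    """Record a version of a service along with who owns it."""
    registry.setdefault(name, {})[version] = {"owner": owner, "url": url}

def resolve(name, version):
    """Look up the URL for a specific version of a service."""
    return registry[name][version]["url"]

# Hypothetical entries: v1 stays available while v2 rolls out.
register("resize", "v1", "platform-team", "https://svc.example.com/resize/v1")
register("resize", "v2", "platform-team", "https://svc.example.com/resize/v2")
```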

Challenges for Serverless Microservices

The information barriers of your organization can be reflected in your serverless applications: services are spread out, so you'll need tools for discovery, versioning, and communication around each microservice. Such tools exist, but they require either building your own system or adopting a platform built for this purpose.

You’ll need to have clear communication on who’s responsible for each microservice.

Testing and deployment can be more difficult, but tools are emerging to automate these issues.

Microservice latencies are generally in the milliseconds, which is still a bit slower than in-process calls within a monolith. But because you can run many microservices elastically in parallel, the performance benefits outweigh the costs in apps of any real complexity.
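The parallelism point can be seen in a small sketch: fanning ten simulated ~50 ms service calls across a thread pool takes roughly one call's latency in wall-clock time, not ten. The `sleep` stands in for a network round trip:

```python
# Sketch: amortizing per-call latency by fanning out service calls in parallel.
import time
from concurrent.futures import ThreadPoolExecutor

def call_service(item):
    time.sleep(0.05)   # simulated ~50 ms microservice round trip
    return item * 2    # toy stand-in for the service's work

items = list(range(10))
start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(call_service, items))
elapsed = time.time() - start  # roughly one round trip, not ten
```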

Serverless Hosting Providers

There are many general-purpose, cloud-based serverless hosting solutions from the big players, and they work great for deploying traditional code on a pay-as-you-use basis. When we set out to build our AI marketplace five years ago, we needed a serverless microservice platform optimized for deploying AI/ML models. Algorithmia began running services even before AWS Lambda hit the market, and is optimized for the entire workflow of putting AI/ML models into continuous deployment.

Tutorials

Serverless Microservice Resources

Next Steps