Looking across the market, we can broadly categorize how applications are currently designed and operated.

Most of the enterprise applications that run in production today are monolithic applications.

Such applications are managed by configuration tools such as Puppet, Chef, and Ansible, monitored by a variety of APM tools and systems like Nagios, use log collectors like Splunk, and follow SOA design principles.

The advantages of running microservices (scale, agility, efficiency) drove their widespread adoption, along with the popularization of Kubernetes distributions (mainly OpenShift), Cloud Foundry, and Mesos. Enterprises often spin off a dedicated team to lead their move to a microservices architecture. This team deploys green-field applications as microservices, while existing applications keep running as monoliths and are managed by separate teams. The tooling employed to manage and deploy distributed microservice applications is different as well: these applications are monitored by microservice-oriented metrics tools like Prometheus and Grafana, which emphasize scale and service discovery, and request flows are traced by tools that implement the OpenTracing standard.

With the emergence of the serverless computing paradigm, enterprises are adding serverless teams charged with writing applications for AWS Lambda, Google Cloud Functions, or Azure Functions. The toolset for serverless applications is provided by the cloud vendor: at AWS, for example, serverless applications are monitored by CloudWatch and traced by X-Ray. Further complicating the transition to the serverless paradigm is the fact that serverless applications typically follow an event-driven architecture, unlike microservices and monoliths, which are driven by request-response.

Because the three coexisting architectures have separate methodologies and different tooling, they are typically managed by different groups, forcing IT teams to work in silos. Transitioning between architectures is extremely hard and often results in partial or full rewrites of existing applications, which is expensive, risky, and halts the delivery of business value while the rewrite takes place.

The problem is clear: the transition from legacy to modern architectures comes at a tremendous price and carries tremendous risk. We at Solo propose a new way of composing applications from existing monoliths while leveraging the advantages of microservice and serverless architectures, without a complete rewrite. Functionality and features can be migrated to newer architectures gradually, piece by piece, without interrupting the delivery of business value.

The common bond that ties these three technologies together is the function. What do we mean by function? Service-oriented applications are, at their core, collections of request-response transactions, which we can think of as functions. These may be API calls such as those found in REST and SOAP in monolithic and microservice applications, or functions in the serverless world. By breaking down applications into their constituent functions, we can compose new applications — or refactor and extend existing ones — function-by-function.

There are two challenges to this approach: discovering the functions that compose services, and controlling the routing of client requests so they reach the desired function.
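To make the first challenge concrete, here is a minimal sketch of function discovery: flattening a Swagger/OpenAPI document into a list of callable functions. The spec is a hypothetical inline example; real discovery would fetch the document from a running service.

```python
# Sketch: discovering "functions" from a Swagger/OpenAPI-style document.
# The spec below is a made-up example for illustration only.
def discover_functions(spec):
    """Flatten an OpenAPI spec into a list of callable functions."""
    functions = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            functions.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "method": method.upper(),
                "path": path,
            })
    return functions

spec = {
    "paths": {
        "/pets": {
            "get": {"operationId": "listPets"},
            "post": {"operationId": "createPet"},
        },
        "/pets/{id}": {"get": {"operationId": "getPet"}},
    }
}

for fn in discover_functions(spec):
    print(fn["name"], fn["method"], fn["path"])
```

Each discovered entry (name, method, path) is exactly the unit a function-level router would need to target.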

Once we are able to route to any API or serverless function, we can compose new applications by mixing and matching back-end services.

We call such composite applications, which consist of functions from different architectures, “hybrid apps”.

We built Gloo as a framework for composing hybrid apps.

Gloo automatically discovers all available functions from a variety of back ends (currently gRPC, Swagger/OpenAPI, AWS Lambda, and Google Cloud Functions) and lets users compose hybrid apps with a simple yet powerful configuration language. Gloo routes directly to functions based on that configuration, so a hybrid app can be created through configuration alone.
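To illustrate the idea of function-level routes, here is a sketch of a routing table that maps request paths to functions on different back ends. The route shape, upstream names, and function names are invented for illustration; this is not Gloo's actual configuration language.

```python
# Sketch: a function-level routing table mixing back ends.
# All names below are hypothetical examples.
routes = [
    {"path": "/checkout",  "upstream": "payments-svc", "function": "createCharge"},
    {"path": "/thumbnail", "upstream": "aws-lambda",   "function": "resizeImage"},
]

def resolve(path):
    """Return (upstream, function) for a request path, or None."""
    for route in routes:
        if path == route["path"]:
            return route["upstream"], route["function"]
    return None

print(resolve("/thumbnail"))
```

The point is that the route's target is a function, not just a host: `/checkout` lands on a microservice API call while `/thumbnail` lands on a serverless function, yet both appear as ordinary routes in one app.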

How Gloo Works

Let’s dive into the Gloo implementation and see how it supports the hybrid app use case.

Gloo is built on top of the Envoy proxy. Existing proxies route at the service level: they can match a route to an IP or a hostname, but they cannot make routing decisions based on the specific function being invoked. Gloo extends Envoy to route at the level of the function, which enables the suite of Envoy features, such as fault injection and traffic shifting, to work at the function level.
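Traffic shifting at the function level is what makes gradual migration possible: you can send, say, 90% of calls to a monolith endpoint and 10% to a new serverless function. Here is a minimal weighted-pick sketch of that mechanism; the destination names and weights are illustrative assumptions, not Gloo or Envoy APIs.

```python
import random

# Sketch: weighted traffic shifting between two implementations of the
# same function. Names and weights below are hypothetical.
destinations = [
    ("monolith:getQuote", 90),  # legacy REST endpoint
    ("lambda:getQuote",   10),  # new serverless function
]

def pick_destination(dests):
    """Pick a destination with probability proportional to its weight."""
    total = sum(weight for _, weight in dests)
    r = random.uniform(0, total)
    upto = 0
    for name, weight in dests:
        upto += weight
        if r <= upto:
            return name
    return dests[-1][0]  # guard against floating-point edge cases
```

As confidence in the serverless implementation grows, the weights shift until the legacy destination can be removed entirely, with no client-visible change.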

In building Gloo, we took full advantage of Envoy’s power and extensibility. Many of Gloo’s features were only made possible by writing custom filters for Envoy. We have already released several, including an AWS Lambda filter, a Google Cloud Functions filter, and a request/response transformation filter.
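To give a feel for what request transformation means conceptually, here is a sketch that reshapes an HTTP request body into the event payload a serverless function might expect. The field names are illustrative assumptions; they do not describe the actual filter's schema.

```python
# Sketch: reshaping a request-response style HTTP body into an
# event-style payload for a serverless back end. Field names are
# hypothetical examples.
def transform_request(http_body):
    return {
        "event": {
            "user": http_body.get("user_id"),
            "payload": http_body.get("data", {}),
        }
    }

print(transform_request({"user_id": "u1", "data": {"size": "small"}}))
```

A transformation like this is what lets an event-driven function sit behind a route that clients still call with a plain request-response API.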

Gloo’s internal logic is very simple; its intimate knowledge of back-end functionality comes from its plugin-based configuration language. This architecture makes Gloo easy to extend: new features and back-end support can be added simply by writing new plugins.
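The plugin-based design described above can be sketched as follows: a core router that knows nothing about back ends, plus plugins that each translate a function destination into concrete routing details. The interfaces and back-end types here are hypothetical, not Gloo's actual plugin API.

```python
# Sketch: a plugin-based router core. All class and field names are
# illustrative assumptions.
class Plugin:
    def handles(self, upstream_type):
        raise NotImplementedError
    def build_route(self, function):
        raise NotImplementedError

class LambdaPlugin(Plugin):
    def handles(self, upstream_type):
        return upstream_type == "aws-lambda"
    def build_route(self, function):
        return {"invoke": f"arn:aws:lambda:::function:{function}"}

class RestPlugin(Plugin):
    def handles(self, upstream_type):
        return upstream_type == "rest"
    def build_route(self, function):
        return {"http_path": f"/{function}"}

def route_for(plugins, upstream_type, function):
    """Ask each plugin in turn; the first match builds the route."""
    for plugin in plugins:
        if plugin.handles(upstream_type):
            return plugin.build_route(function)
    raise ValueError(f"no plugin for {upstream_type}")

plugins = [LambdaPlugin(), RestPlugin()]
print(route_for(plugins, "rest", "getPet"))
```

Supporting a new back end means adding one more plugin to the list; the core `route_for` loop never changes.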

For building and deploying customized builds of both Gloo and Envoy, we provide TheTool, which lets developers extend both Gloo and Envoy without having to learn the build and deployment process. It also enables users to build minimal images, compiling only the features they need.

Gloo functions out-of-the-box as a Kubernetes ingress. More integrations will be released soon.

We invite you to give Gloo a try. Make your own hybrid app, or have some fun gluing functions across clouds. To get started, watch our video example here and check out the walkthrough here.