The Cloudinfo project has been a core part of the Pipeline platform since day one. We built and open sourced Cloudinfo as a unified interface for accessing prices and services from all the cloud providers we support (AWS, Azure, Google, Alibaba and Oracle), resulting in a multi- and hybrid-cloud application platform. When a Pipeline user creates a single-, hybrid- or multi-cloud Kubernetes cluster on one of these cloud providers or on-premise, they have the option of either specifying their resource requirements (number of CPUs, memory, advanced network, I/O, etc.) or simply selecting the cheapest cloud provider. This is accomplished through our cluster layout recommendation engine, Telescopes, which relies on service/price information provided by Cloudinfo.

The first version of the Cloudinfo API served us well, but as our platform grew and we added more features, we realized we needed something better. The rudimentary exact match filtering that REST provided was no longer enough; we needed a way to “query” data and build complex relationships between services across multiple providers, all while remaining cognizant of associated costs.

On the hunt for a query API 🔗︎

We began our investigation of web-based querying solutions by listing the features of an ideal query API:

It should be based on a standard, so clients could easily integrate with it.

It should allow for the writing of rich queries.

At the very least, it should be compatible with Go and JavaScript, our primary clients.

It should have an interface language, ideally, with code generation support.

We set a high bar, but that was because we wanted to find the best possible solution.

After our initial round of research, we came up with several candidates:

GraphQL

MongoDB’s query API

JQL (Jira Query Language)

SQL

You might be surprised to see SQL and JQL on this list, but both have features similar to what we had in mind. However, as we explored them as possible solutions, we realized they were not ideally suited for the web. Furthermore, their implementation would have required much more effort than we planned to invest.

Our next candidate was a custom API, based on MongoDB’s query API. It looked promising. It was compatible with Cloudinfo’s existing API, and allowed us to write rich queries, which meant checking another feature requirement off our list. We went ahead and implemented a proof of concept version of the API, but stopped almost immediately. Without the proper interface definitions and without code generation, both the server and the client-side implementation were painfully slow.

It was at that point that we started to consider GraphQL. GraphQL wasn’t actually our last choice but our first; however, it lacked a few things (like operators in filters) that other options provided, so we had decided to leave it until the very end. Those missing features aside, GraphQL was the perfect candidate:

It’s based on a standard.

It supports Go and JavaScript (and much more).

It has its own interface language and supports code generation.

After a little exploration, it was easy to settle on GraphQL as our Search API.

Note: we were already using GraphQL within the Pipeline platform; with our Istio operator we are building large, complex service meshes across hybrid clouds, and we needed an easy and fast representation of these services in order to query or modify their edges.

Search API with GraphQL 🔗︎

It was relatively easy to describe our current API using GraphQL’s language, but, as previously mentioned, the language lacked support for things like operators. Without them, we were back at square one, with only exact match filtering - not ideal, since deciding where, when and on which cloud/instance-type to launch a service meant doing lots of advanced querying. After a bit of tinkering with the problem, we came up with the following solution:

```graphql
type Query {
    instanceTypes(provider: String!, service: String!, filter: InstanceTypeQueryInput): [InstanceType!]!
}

input InstanceTypeQueryInput {
    price: FloatFilter
    spotPrice: FloatFilter
    spot: Boolean
    cpu: FloatFilter
    memory: FloatFilter
    gpu: FloatFilter
}

input IntFilter {
    lt: Int
    lte: Int
    gt: Int
    gte: Int
    eq: Int
    ne: Int
    in: [Int!]
    nin: [Int!]
}

input FloatFilter {
    lt: Float
    lte: Float
    gt: Float
    gte: Float
    eq: Float
    ne: Float
    in: [Float!]
    nin: [Float!]
}
```

Although the definition may not look nice, using it is actually quite easy:

```graphql
query {
    instanceTypes(
        provider: "azure"
        service: "pke"
        filter: {
            price: { lt: 0.9 }
            cpu: { eq: 4 }
            memory: { gt: 4 }
        }
    ) {
        name
        price
        cpu
        memory
        # other instance type fields
    }
}
```
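To give a sense of how such a filter can be evaluated server-side, here is a minimal sketch in Go. The `FloatFilter` struct mirrors the schema's input type, but the `Matches` helper is purely illustrative, not the actual Cloudinfo implementation:

```go
package main

import "fmt"

// FloatFilter mirrors the FloatFilter input type from the schema;
// nil fields mean the corresponding operator was not set in the query.
type FloatFilter struct {
	Lt, Lte, Gt, Gte, Eq, Ne *float64
	In, Nin                  []float64
}

// Matches reports whether v satisfies every operator set on the filter.
func (f *FloatFilter) Matches(v float64) bool {
	if f == nil {
		return true // no filter means everything matches
	}
	if f.Lt != nil && !(v < *f.Lt) {
		return false
	}
	if f.Lte != nil && !(v <= *f.Lte) {
		return false
	}
	if f.Gt != nil && !(v > *f.Gt) {
		return false
	}
	if f.Gte != nil && !(v >= *f.Gte) {
		return false
	}
	if f.Eq != nil && v != *f.Eq {
		return false
	}
	if f.Ne != nil && v == *f.Ne {
		return false
	}
	if len(f.In) > 0 && !contains(f.In, v) {
		return false
	}
	if len(f.Nin) > 0 && contains(f.Nin, v) {
		return false
	}
	return true
}

func contains(s []float64, v float64) bool {
	for _, x := range s {
		if x == v {
			return true
		}
	}
	return false
}

func main() {
	lt := 0.9
	price := &FloatFilter{Lt: &lt} // the price: { lt: 0.9 } filter from the query above
	fmt.Println(price.Matches(0.5)) // true
	fmt.Println(price.Matches(1.2)) // false
}
```

Each operator narrows the result set independently, so combining `price`, `cpu` and `memory` filters is just a conjunction of these checks.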

Writing that definition was surprisingly easy, which reassured us that GraphQL had indeed been the right choice.

After preparing our initial API definition, we moved on to choosing a server library. There are a number of GraphQL libraries for Go, but we ended up selecting gqlgen:

It’s actively maintained.

It supports code generation.

It’s easy to integrate into existing code.

It’s extensible with custom plugins.

Our job, then, was roughly as easy as implementing an interface:

```go
type QueryResolver interface {
	InstanceTypes(ctx context.Context, provider string, service string, filter *cloudinfo.InstanceTypeQueryFilter) ([]cloudinfo.InstanceType, error)
}
```

The rest (serialization, routing) is handled by the library itself. Not bad, right?

GraphQL turned out to be the right choice for our Cloudinfo Search API. Clients can integrate with it easily, and its query language is easy to use. And thanks to gqlgen, we were able to generate a lot of code very quickly.

About Banzai Cloud Pipeline 🔗︎

Banzai Cloud’s Pipeline provides a platform for enterprises to develop, deploy, and scale container-based applications. It leverages best-of-breed cloud components, such as Kubernetes, to create a highly productive, yet flexible environment for developers and operations teams alike. Strong security measures — multiple authentication backends, fine-grained authorization, dynamic secret management, automated secure communications between components using TLS, vulnerability scans, static code analysis, CI/CD, and so on — are default features of the Pipeline platform.

Banzai Cloud is changing how private clouds are built, simplifying the development, deployment, and scaling of complex applications, and bringing the full power of Kubernetes and Cloud Native technologies to developers and enterprises everywhere.

#multicloud #hybridcloud #BanzaiCloud