tapir is a library for describing HTTP endpoints. It combines Scala’s type safety with the declarative style and introspection capabilities known from Java annotation-based frameworks, such as Spring or JAX-RS.

By leveraging Scala’s flexibility and functional programming features, tapir relies only on Scala (no code generation or external DSLs) and is based on simple, immutable data structures. This is in contrast to annotations, which are an interpreted, non-typesafe, non-composable mini-language embedded in Java.

(Image: tapir sticker by ImpurePics)

At the same time, being a Scala library, the main goal of tapir is to be programmer friendly:

result in code readable also by people not familiar with the library

provide a discoverable API through standard auto-complete

use human-comprehensible types, which you are not afraid to write down

separate “business logic” from endpoint definition & documentation

be reasonably type safe

A tapir endpoint — which is only a description — can be interpreted as a server, client or documentation. tapir doesn’t include its own HTTP server or client. Instead, it leverages one of the existing implementations:

Akka HTTP or http4s for server

Akka HTTP, async-http-client or OkHttp for client (through sttp)

OpenAPI (Swagger) for documentation.

But, how do endpoint definitions look in practice? Let’s find out, exploring three endpoints:

getting a list of entities as json

streaming data

submitting multipart forms with text and binary parts

Before making a deep dive, below you’ll find the endpoints described using tapir’s API, as plain Scala values. Without knowing the library (yet!):

do you suspect how the endpoints are intended to be used?

do the types give a hint on what kind of information each endpoint consumes and produces?

is the code readable, are the intentions of the writer of the code clearly communicated?

how does it compare to the meta-data that is expressible via Java annotations?

In the subsequent sections, we’ll go through the code of each endpoint. Together with the first one, we’ll take a crash course in tapir’s API. If you’d prefer to see how to interpret the endpoint as a server or documentation, you can safely skim most of the next section.

If you’d like to read and explore code at the same time, all of the code covered is available on GitHub.

Endpoint 1: getting a list of books

Our running example will revolve around books. Each book will have an id, a title, an author, the year in which it was published, and an optional cover image. The first endpoint will allow retrieving a list of books in the system, optionally filtered by publishing year, and optionally limited to a given number of results.

Here’s the model which we’ll be using:
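The original model isn’t reproduced here, but based on the description above it might look roughly like this (class and field names are assumptions; the cover image is served by a separate endpoint, so it isn’t part of the JSON model):

```scala
import java.util.UUID

// Value class wrapping the publishing year (representation assumed)
case class Year(year: Int) extends AnyVal

case class Author(name: String, country: String)

case class Book(id: UUID, title: String, year: Year, author: Author)

// Error details, returned as JSON when a request fails (fields assumed)
case class ErrorInfo(error: String)
```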

The endpoint that we’d like to describe is:

GET /api/v1.0/books?year=...&limit=...

where year and limit are optional parameters, the first one mapped to the Year value class, the second represented as an integer. The result should be a JSON representation of a List[Book]. Additionally, our API should signal any errors as JSON as well (e.g. when the given limit is negative), corresponding to the ErrorInfo case class.

How to describe such an endpoint? Remember that at this point we only want to capture the structure. We’ll start with an empty endpoint, and gradually refine our description. After each step, we’ll obtain a new, immutable description of a (partial) endpoint.

We’ll start by adding support for a non-standard type that we’ll want to use in the query: Year. In tapir, custom types are supported by creating a Codec, which defines a bi-directional mapping between a raw type and the custom type:
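One way to write such a codec, sketched here assuming tapir 0.x’s Codec API, is to derive it from the built-in Int codec by mapping in both directions:

```scala
import tapir._
import tapir.Codec.PlainCodec

// A sketch: Year <-> Int, on top of the plain-text Int codec
// (assumes the intPlainCodec / map combinators of tapir 0.x)
implicit val yearCodec: PlainCodec[Year] =
  Codec.intPlainCodec.map(Year(_))(_.year)
```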

Next, we’ll define the endpoint itself:
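Assuming the model above and tapir 0.x’s API, the description might look like this (the value name is assumed):

```scala
import tapir._
import tapir.json.circe._
import io.circe.generic.auto._

// A sketch of the endpoint description, built step by step
val getBooks: Endpoint[(Option[Year], Option[Int]), (StatusCode, ErrorInfo), List[Book], Nothing] =
  endpoint.get
    .in("api" / "v1.0" / "books")            // constant path
    .in(query[Option[Year]]("year"))          // optional query parameter
    .in(query[Option[Int]]("limit"))          // optional query parameter
    .errorOut(statusCode)                     // varying error status code
    .errorOut(jsonBody[ErrorInfo])            // error body as JSON
    .out(jsonBody[List[Book]])                // success body as JSON
```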

Each endpoint consists of inputs, error-outputs and outputs. The distinction between error and normal outputs is due to the fact that usually an API has different responses in both cases — and that’s supported as a first-class construct.

The first input (defined using the .in method) specifies the path of the endpoint: .in("api" / "v1.0" / "books"). This path is constant, and doesn’t map to any values in the HTTP request. The second and third inputs map to query parameter values in the URL: query[Option[Year]]("year") and query[Option[Int]]("limit"). Note that optional values are simply expressed using Scala’s Options.

When there’s an error, the body will be a JSON mapping to the ErrorInfo class. By default, error-responses map to the 400 Bad Request status code, and successful responses map to the 200 OK status code. However, depending on the exact error being returned, we’d like to use varying status codes. Hence, we additionally use the statusCode output, which maps to an Int in the response; we do that only for errors, the success case will continue using the default status code.

Finally, using the .out method, we specify that upon success, the body should be JSON, mapping to a List[Book].

Inputs can be specified in any order, as well as interleaved with outputs. It is just for reading convenience that they are grouped here.

Inputs/outputs

All of the methods used in the example above (query, /, statusCode and jsonBody) are defined in the tapir package, and are in scope thanks to the import statement. If you’d like to explore what kinds of inputs/outputs are available, just type tapir. and let your IDE’s auto-complete guide you.

Each of these methods yields a value, which is an immutable description of an input/output.

For example, jsonBody[List[Book]] is a description of a body input or output, which will have the json content type and will be serialised or de-serialised to a list of books.

An important note here: to be able to use the JSON body, we’ll need to integrate with an existing JSON library. Here, we use Circe with automatic generic derivation (import io.circe.generic.auto._), as well as the tapir-Circe integration, which provides Codecs backed by Circe’s Encoders/Decoders (import tapir.json.circe._).

The type

The type of the endpoint description captures what is needed for the endpoint to be interpreted as a server or a client. We need information on all the inputs and outputs which map to values in the request/response. The Endpoint class has four type parameters, specifying the input type, the error output type, the output type and the streaming requirements (which we’ll cover later). In our case, the type is:
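Combining the inputs and outputs described so far, this gives:

```scala
Endpoint[(Option[Year], Option[Int]), (StatusCode, ErrorInfo), List[Book], Nothing]
```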

The path input contributes no values to the input (as the path is constant), so it doesn’t influence the input type. However, the query inputs each contribute a single value: the year and the limit. Combined, we get a 2-tuple: (Option[Year], Option[Int]). The same goes for the outputs.

Refactoring the endpoint

Leveraging the fact that the endpoint, the inputs and the outputs are all immutable values, we can do some refactoring, which will make writing subsequent endpoints easier. First, we’ll note that all of the endpoints that we’ll define in our system will have the /api/v1.0 path prefix, and upon an error will return an ErrorInfo JSON (with a varying status code). To avoid duplication, we can create a custom base endpoint.

With tapir, we can use a very convenient and well-known technique of avoiding code duplication, which is called “extract value” or “extract method”. The same mechanism we use for “normal” code (because tapir code is normal code)!
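Extracted as a value, the base endpoint might look like this (a sketch; the value name is assumed):

```scala
// Shared path prefix and error output, extracted as a plain value
val baseEndpoint: Endpoint[Unit, (StatusCode, ErrorInfo), Unit, Nothing] =
  endpoint
    .in("api" / "v1.0")
    .errorOut(statusCode.and(jsonBody[ErrorInfo]))
```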

Our base endpoint maps to no values from the HTTP request (hence the Unit as the first type parameter), but specifies the type of the error output. Note that this type isn’t final, and can also be extended by other endpoints.

As a second step of our refactoring, we’ll note that the books filter (by year, and limit of results) will probably be reused by other endpoints (such as searching for books by title, by author etc.). Moreover, we could use a more descriptive representation than an (Option[Year], Option[Int]) tuple. Hence, we’ll create a case class representing the filter:
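For example:

```scala
// Year as defined for the model; repeated so this snippet is self-contained
case class Year(year: Int) extends AnyVal

// A descriptive replacement for the (Option[Year], Option[Int]) tuple
case class BooksQuery(year: Option[Year], limit: Option[Int])
```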

and extract a value which will describe a books query input:
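A sketch of such an input, using the .and and .mapTo combinators explained below (the value name is assumed):

```scala
// Combine the two query inputs, then map the resulting tuple
// to the matching BooksQuery case class
val booksQueryInput: EndpointInput[BooksQuery] =
  query[Option[Year]]("year")
    .and(query[Option[Int]]("limit"))
    .mapTo(BooksQuery)
```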

Note that we can combine multiple inputs not only by calling .in or .out on the endpoint description multiple times, but also by using .and (and the alias for paths: /) on inputs themselves.

Moreover, tapir provides a convenience .mapTo method to map a tuple to a matching case class representation. In effect, we get a value of type EndpointInput[BooksQuery], which describes a (composite) input mapping to a BooksQuery instance.

Final result

After the refactoring, here’s the first of the three endpoints that we are going to define:
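Putting the pieces together, the refactored description might read as follows (a sketch; it builds on the base-endpoint and query-input values extracted above, names assumed):

```scala
// The same endpoint as before, now built from re-usable parts
val getBooks: Endpoint[BooksQuery, (StatusCode, ErrorInfo), List[Book], Nothing] =
  baseEndpoint.get
    .in("books")
    .in(booksQueryInput)
    .out(jsonBody[List[Book]])
```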

Some of the key features of this representation:

the endpoint is described as an immutable Scala value (a case class instance)

re-usable parts of the endpoint description are extracted as values, which can be navigated to using the IDE

reading the definition (in English) gives a good idea of what the endpoint is

the type of the endpoint gives precise (and readable) information on how the endpoint maps to HTTP requests and responses

Endpoint 2: streaming a book cover image

The second example will describe an endpoint for streaming book cover images. We’ll be using the same baseEndpoint we’ve defined before:
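A sketch of the description, assuming tapir 0.x’s streaming API (the exact streamBody signature is an assumption):

```scala
import java.util.UUID
import akka.stream.scaladsl.Source
import akka.util.ByteString

// Streaming output: note the 4th type parameter is no longer Nothing
val getBookCover: Endpoint[UUID, (StatusCode, ErrorInfo), Source[ByteString, Any], Source[ByteString, Any]] =
  baseEndpoint.get
    .in("books" / path[UUID]("bookId") / "cover")
    .out(streamBody[Source[ByteString, Any]](schemaFor[Array[Byte]], MediaType.OctetStream()))
```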

Unlike in the first endpoint, here we capture one segment of the path, which maps to the id of the book, for which to get the cover image. This is done using the path[UUID]("bookId") method.

Moreover, we specify that the output body will be an Akka Stream: Source[ByteString, Any]. That’s a special kind of output, as it not only influences the input/output type, but also the 4th type parameter of Endpoint: the streaming requirements. While other inputs/outputs can be interpreted by any client/server interpreter, streams are interpreter-specific.

Endpoints using Akka Streams streaming bodies can only be used when using the Akka HTTP server interpreter, or the Akka HTTP sttp client interpreter. Similarly, an endpoint using FS2 streams as the body can only be used with an http4s server interpreter, or a compatible sttp client interpreter.

Endpoint 3: submitting multipart forms

Finally, we’ll describe an endpoint for adding a book. To add a book, besides the book details such as author and title, we’ll also give the user the possibility to submit a cover image. This is possible using multipart form submissions.

To handle multipart forms, we’ll first define a case class containing all the information that users will submit via the form:
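Such a case class might look like this (field names assumed from the parts described below):

```scala
import java.nio.file.Path

// Year as before; repeated so this snippet is self-contained
case class Year(year: Int) extends AnyVal

// One field per form part; the optional binary cover part is
// stored as a temporary file, represented as Java's Path
case class NewBook(title: String, year: Year, authorName: String,
                   authorCountry: String, cover: Option[Path])
```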

We’ll also secure our endpoint via a bearer token, which will be a string, but for readability we’ll use a type alias:
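Such an alias is one line of plain Scala:

```scala
// Purely for readability of the endpoint's type; a bearer token is just a String
type AuthToken = String
```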

Finally, here’s the endpoint itself:
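A sketch, assuming tapir 0.x’s auth.bearer and multipartBody inputs (the value name is assumed):

```scala
// POST /api/v1.0/books, secured with a bearer token,
// accepting the book details as a multipart form
val addBook: Endpoint[(AuthToken, NewBook), (StatusCode, ErrorInfo), Unit, Nothing] =
  baseEndpoint.post
    .in("books")
    .in(auth.bearer)
    .in(multipartBody[NewBook])
```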

Again, we’re using the baseEndpoint . Here however, we’re using the .post method, as this endpoint corresponds to POST /api/v1.0/books . Another important feature is the auth.bearer input, which describes an Authorization header input, additionally marking it as a means of authentication (which is important when generating documentation).

But the nicest part is specifying that the input body of the endpoint should be a multipart form. We use the multipartBody[NewBook] method, which yields a description of an input (as always, an immutable value! — a case class instance), mapping the title, year, authorName and authorCountry text parts to the appropriate case class values, and storing the optional cover binary part as a temporary file (represented as Java’s Path).

And that’s it! Describing multipart forms is as easy as creating a case class with the appropriate fields.

Interpreting as a server

Describing endpoints is nice, but the descriptions are just values: they don’t do anything. Let’s change this and interpret our endpoints as a server, and later as documentation. Endpoints can also be interpreted as clients, which however isn’t covered in this article.

Before interpreting as a server, we need to first pick an underlying stack. Here, we’ll be using Akka HTTP and Future s to represent asynchronous and side-effecting computations. If you’d rather work with Task s, IO s etc., you should use the http4s interpreter.

We’ll be using a thread-unsafe, var-based “database” with a couple of books already in place:
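For illustration, such a “database” might be sketched as follows (the model is repeated so the snippet is self-contained; the contents are made up):

```scala
import java.util.UUID

case class Year(year: Int) extends AnyVal
case class Author(name: String, country: String)
case class Book(id: UUID, title: String, year: Year, author: Author)

// A thread-unsafe, var-based, in-memory "database"
object Library {
  var books: Vector[Book] = Vector(
    Book(UUID.randomUUID(), "The Sorrows of Young Werther", Year(1774),
      Author("Johann Wolfgang von Goethe", "Germany")),
    Book(UUID.randomUUID(), "Nad Niemnem", Year(1888),
      Author("Eliza Orzeszkowa", "Poland"))
  )
}
```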

To interpret an endpoint as a server, we’ll need to provide the business logic for each endpoint that we’ve defined. To do this, we’ll import an extension method on the Endpoint class, toRoute, from the tapir.server.akkahttp package. This method accepts the business logic as a parameter, and returns an Akka HTTP Route as a result:
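A sketch of what this might look like for the getBooks endpoint (the function body is made up; Library stands for the var-based “database” mentioned above):

```scala
import scala.concurrent.Future
import tapir.server.akkahttp._
import akka.http.scaladsl.server.Route

// Business logic: a plain function from the input type to a Future
// of Either[error output, success output]
def bookListing(query: BooksQuery): Future[Either[(StatusCode, ErrorInfo), List[Book]]] =
  Future.successful {
    if (query.limit.exists(_ < 0))
      Left((400, ErrorInfo("Limit must be non-negative")))
    else {
      val byYear = query.year.fold(Library.books)(y => Library.books.filter(_.year == y))
      Right(query.limit.fold(byYear)(byYear.take).toList)
    }
  }

// Interpreting the description plus the logic as an Akka HTTP route
val getBooksRoute: Route = getBooks.toRoute(bookListing)
```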

As we are in Akka-land, the business logic should return a Future . Moreover, we need to signal that either the error-output, or that the normal-output should be used. We do this using an Either : by convention, if the result is a Left , that signals an error. If the result is wrapped in a Right , this signals success.

Hence, for the getBooks endpoint, which has type:

Endpoint[BooksQuery, (StatusCode, ErrorInfo), List[Book], Nothing]

the corresponding type of the business logic that we need to provide is:

BooksQuery => Future[Either[(StatusCode, ErrorInfo), List[Book]]]

Note that everything is type-safe! The business logic accepts as parameters data which is extracted and parsed according to the description in the endpoint; moreover, it is required to return data of the specified type as well. The response is then serialised using the captured Circe JSON codecs.

We implement the business logic of the remaining two endpoints in a similar way:

We now have three completely regular Akka HTTP Routes. We can surround them with other directives, e.g. for logging or metrics, or manipulate them in any other way, just as we would with routes created using Akka HTTP’s API. Here, we’ll simply start a server with these routes on port 8080:
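Starting the server might be sketched as follows (route values and setup are assumed; Akka HTTP 10.1-era API):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.ActorMaterializer

implicit val actorSystem: ActorSystem = ActorSystem()
implicit val materializer: ActorMaterializer = ActorMaterializer()

// Combine the three routes (names assumed) and bind on port 8080
val routes: Route = getBooksRoute ~ getBookCoverRoute ~ addBookRoute
Http().bindAndHandle(routes, "localhost", 8080)
```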

Interpreting as documentation

Thanks to the fact that our endpoint descriptions capture the whole structure of the endpoint, we can also automatically generate API documentation.

Moreover, we can enrich the endpoint descriptions with meta-data, such as textual clarifications on the functionality of the API or data examples. For instance, we might provide an example of a book instance that is returned by the get books endpoint (note the .example method invocation):
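A sketch of such an enrichment (the example book is made up):

```scala
import java.util.UUID

// The same endpoint as before, with an example value attached
// to the output description for documentation purposes
val getBooksDocumented: Endpoint[BooksQuery, (StatusCode, ErrorInfo), List[Book], Nothing] =
  baseEndpoint.get
    .in("books")
    .in(booksQueryInput)
    .out(jsonBody[List[Book]].example(List(
      Book(UUID.randomUUID(), "The Pillars of the Earth", Year(1989),
        Author("Ken Follett", "UK"))
    )))
```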

tapir contains an OpenAPI interpreter, which translates a list of Endpoints to an instance of the OpenAPI class. OpenAPI is the root of a family of case classes, which directly model the OpenAPI specification.

Thanks to that approach, if there’s some feature of the specification that is not covered by tapir, and cannot be automatically generated, you can always add it to the generated documentation, by manipulating the returned instance. When modifying a deeply nested case class structure, projects such as quicklens might be very helpful!

We can later serialise the OpenAPI instance as yaml or json:
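A sketch, assuming tapir 0.x’s documentation interpreter (the title and version strings are made up):

```scala
import tapir.docs.openapi._
import tapir.openapi.OpenAPI
import tapir.openapi.circe.yaml._

// Interpret the endpoint descriptions as an OpenAPI model...
val docs: OpenAPI = List(getBooks, getBookCover, addBook)
  .toOpenAPI("The tapir library example", "1.0")

// ...then render it as yaml
val docsYaml: String = docs.toYaml
```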

The generated yaml can be exposed using Swagger’s UI by adding a couple of Akka HTTP routes to serve the static content (images, css, html) and the documentation itself. See the SwaggerUI class in the sources for details.

Summary

We’ve covered a lot of ground! Let’s summarise what we’ve seen so far.

Java frameworks use annotations to define the mapping between business logic methods and HTTP endpoints. This has three main benefits:

the HTTP meta-data is separated from the business logic

annotations can be processed to generate documentation or a client

the intention of the code is clearly communicated, that is, the code is not only easy to write, but also easy to read

On the other hand, we have Scala, which has a much more advanced type system, as well as a number of features allowing flexibility in defining abstractions.

tapir combines the best of those worlds. Instead of using a separate language for defining the meta-data (the language of annotations), we use the same language to express the meta-data and the business logic. After all, why should we use a different one? Specifying HTTP mappings isn’t such a special task that it should require a separate language! This allows us to use the same abstraction mechanisms as we use for all other code — for example, extracting common functionality as values or methods.

Hence, we manage to maintain the declarativeness, separation of concerns and readability of Java’s annotations; at the same time, significantly improving the type-safety, composability and abstraction capabilities.

Remember that the descriptions are plain Scala values — instances of case classes. You can construct these case classes by hand, or pattern-match them to create your own interpreters. Describing HTTP endpoints isn’t a complex thing, and it shouldn’t require complex code. Let’s use simple code for simple problems!

Give tapir a try and run the example developed above; in case of questions, consult the docs, head over to the gitter room, or simply create an issue!