Perhaps the single most important choice an API developer can make is that of programming language and architecture. How an API will communicate with consumers, how security will be implemented, and how the API will function alongside other services are all largely constrained by the language and methodology with which it is developed.

In this piece, we’ll discuss a Java framework called Spark, its basic use, history, and compare it with other languages and toolkits. We’ll highlight what makes Java Spark an incredibly functional and useful toolkit for crafting APIs, and provide some examples. In upcoming posts, we will also show you how you can use this framework in Scala and Kotlin, two other languages that run on the Java Virtual Machine (JVM).

Introducing Spark

Spark is a Free and Open Source Software (FOSS) application framework written in Java. Not to be confused with Apache Spark, this toolkit is designed to make it easy and fast to create APIs. It is a lightweight library that you link into your application to start serving up data. It allows you to define routes and dispatch them to functions that will respond when those paths are requested. This is done in code using a straightforward interface, without the need for XML configuration files or annotations like some other web frameworks require.

It was first created in 2011 by Per Wendel, and version 2.0 was released in 2014 to take advantage of new language features in Java 8. Since its inception, it has aimed to facilitate rapid development. According to a 2015 survey, 50% of Spark users utilized the toolkit to develop scalable REST APIs.

A simple example looks like the following prototypical snippet:

import static spark.Spark.*;

public class HelloWorld {
    public static void main(String[] args) {
        get("/hello", (request, response) -> "Hello World");
    }
}

When this main method is executed, Spark will fire up a Web server, and you can immediately hit it like this:

curl "http://localhost:4567/hello"

When you do, the lambda function given to the statically imported spark.Spark.get method will fire. The output of the lambda is what the client will be served (i.e., the string Hello World). The routing can get a bit more complicated, but that is the basic idea: you specify a path that gets dispatched to a certain function based on the URI, HTTP method, and Accept headers.
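Conceptually, this dispatch model amounts to looking up a handler keyed by the HTTP method and path. The following toy sketch (our own illustration, not Spark's actual implementation) shows the idea with a plain Java map of routes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch of method + path dispatch. Spark's real matcher also
// handles path parameters, wildcards, and Accept headers.
public class RouteTable {
    private static final Map<String, Function<String, String>> routes = new HashMap<>();

    // Register a handler under "METHOD path", as Spark's get()/post() do.
    public static void add(String method, String path, Function<String, String> handler) {
        routes.put(method + " " + path, handler);
    }

    // Look up the handler for a request and invoke it with the request body.
    public static String dispatch(String method, String path, String body) {
        Function<String, String> handler = routes.get(method + " " + path);
        return handler == null ? "404 Not Found" : handler.apply(body);
    }

    public static void main(String[] args) {
        add("GET", "/hello", body -> "Hello World");
        System.out.println(dispatch("GET", "/hello", ""));
        System.out.println(dispatch("DELETE", "/hello", ""));
    }
}
```

Registering a route is just adding an entry; a request that matches no entry falls through to a 404, which mirrors what Spark does for unrouted paths.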

More Complicated Example

Below is an example from Spark’s GitHub repository that is slightly more complicated and shows off some of the toolkit’s other features:

import static spark.Spark.*;

public class SimpleExample {
    public static void main(String[] args) {
        get("/hello", (request, response) -> "Hello World!");

        post("/hello", (request, response) -> "Hello World: " + request.body());

        get("/private", (request, response) -> {
            response.status(401);
            return "Go Away!!!";
        });

        get("/users/:name", (request, response) -> "Selected user: " + request.params(":name"));

        get("/news/:section", (request, response) -> {
            response.type("text/xml");
            return "<news>" + request.params("section") + "</news>";
        });

        get("/protected", (request, response) -> {
            halt(403, "I don't think so!!!");
            return null;
        });

        get("/redirect", (request, response) -> {
            response.redirect("/news/world");
            return null;
        });
    }
}

Let’s break down the functionality piece by piece to see how else Spark can help with dispatching in your API. First, we have the static response for the hello endpoint (i.e., http://localhost:4567/hello) and the slightly more dynamic POST handler:

get("/hello", (request, response) -> "Hello World!");
post("/hello", (request, response) -> "Hello World: " + request.body());

Later, we’ll compare this snippet to one written in Go, but for now notice that these two method calls cause Spark to route requests made to the hello URL as an HTTP GET or POST. The POST handler isn’t much more complicated than the GET one: it just appends the request’s body to the string Hello World: and flushes that to the response.

Next, we have our API’s more guarded routes:

get("/private", (request, response) -> {
    response.status(401);
    return "Go Away!!!";
});

// ...

get("/protected", (request, response) -> {
    halt(403, "I don't think so!!!");
    return null;
});

In these cases, a visit to the /private or /protected endpoint will respond with a failure status of 401 or 403, respectively. The body of the response will be either the value returned from the lambda, Go Away!!!, or nothing at all. The only difference between the two is how they set the status: the former sets the status code explicitly using the Response object and then continues, returning its body as a normal response; halt, on the other hand, immediately stops the request within the filter or route being processed, terminating the request.
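The reason halt can cut a request short from anywhere in a route or filter is that it acts as an early exit: conceptually, it throws a control-flow exception that the framework catches and turns into the final response. Here is a simplified sketch of that mechanism (our own illustration, not Spark’s actual source):

```java
// Sketch of how a halt()-style early exit can work: the framework
// catches a control-flow exception and uses it as the response.
public class HaltSketch {
    static class HaltException extends RuntimeException {
        final int status;
        final String body;
        HaltException(int status, String body) {
            this.status = status;
            this.body = body;
        }
    }

    // Throwing unwinds the stack no matter how deep in the handler we are.
    static void halt(int status, String body) {
        throw new HaltException(status, body);
    }

    // Run a handler, translating a halt into "status body".
    public static String handle(Runnable handler) {
        try {
            handler.run();
            return "200 OK";
        } catch (HaltException e) {
            return e.status + " " + e.body;
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(() -> halt(403, "I don't think so!!!")));
    }
}
```

Because the exception unwinds the stack, nothing after the halt call runs, which is why the /protected lambda’s return null is never reached.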

Then, we have this interesting route:

get("/users/:name", (request, response) -> "Selected user: " + request.params(":name"));

The GET handler for /users/:name allows a client to provide a username (e.g., by requesting http://localhost:4567/users/bob), perhaps to log in as a specific user who has certain settings and preferences saved on the server. It will respond with text reading Selected user: bob. The username, bob, is fetched from the URL using the params method on Spark’s Request object, allowing you to build dynamic URLs and read their values at run-time.
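Pattern segments such as :name are matched positionally against the segments of the request path. A simplified sketch of that extraction (Spark’s own matcher is more sophisticated, handling wildcards and trailing slashes) might look like this:

```java
import java.util.HashMap;
import java.util.Map;

// Toy extraction of :param segments from a path template.
public class PathParams {
    // Returns the bound parameters, or null if the path doesn't match.
    public static Map<String, String> match(String template, String path) {
        String[] tSegs = template.split("/");
        String[] pSegs = path.split("/");
        if (tSegs.length != pSegs.length) return null; // segment counts differ
        Map<String, String> params = new HashMap<>();
        for (int i = 0; i < tSegs.length; i++) {
            if (tSegs[i].startsWith(":")) {
                params.put(tSegs[i], pSegs[i]); // bind :name -> bob
            } else if (!tSegs[i].equals(pSegs[i])) {
                return null; // literal segment mismatch
            }
        }
        return params;
    }

    public static void main(String[] args) {
        System.out.println(match("/users/:name", "/users/bob"));
    }
}
```

Matching /users/bob against /users/:name binds :name to bob, which is exactly the value request.params(":name") hands back inside the route.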

Then, we have a route that changes the response type:

get("/news/:section", (request, response) -> {
    response.type("text/xml");
    return "<news>" + request.params("section") + "</news>";
});

This snippet also uses the params method, and it shows how the response type can be changed using the Response object’s type method. When the user calls an HTTP GET on the /news endpoint for some section (such as http://localhost:4567/news/US), the API will return an XML document. Setting the Content-Type HTTP header with Spark’s type method ensures that the user agent handles it properly.
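Since the section value comes straight from the URL, anything interpolated into an XML body should be escaped so a hostile or malformed value cannot break the document. A small helper along these lines (hypothetical, not part of Spark) keeps the response well-formed:

```java
// Hypothetical helper: wrap a URL-supplied value in an XML element,
// escaping characters that would otherwise break the markup.
public class XmlWrap {
    public static String escape(String s) {
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;");
    }

    public static String element(String tag, String value) {
        return "<" + tag + ">" + escape(value) + "</" + tag + ">";
    }

    public static void main(String[] args) {
        System.out.println(element("news", "US"));    // <news>US</news>
        System.out.println(element("news", "a<b&c")); // entities escaped
    }
}
```

Inside the route, the return statement would then become element("news", request.params("section")) instead of concatenating the raw parameter.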

The last route shows how we can use Spark’s Response object to create a 302 redirect from within the lambda that gets dispatched when the /redirect endpoint is hit:

get("/redirect", (request, response) -> {
    response.redirect("/news/world");
    return null;
});

Templatized Responses

Spark can also render views using half a dozen templating engines, including Velocity, Freemarker, and Mustache. With a templated response, you wire up the route a little differently than in the examples above. Here is a basic sample using Velocity:

public static void main(String[] args) {
    get("/hello", (request, response) -> {
        Map<String, Object> model = new HashMap<>();
        Map<String, String> data = new HashMap<>();
        data.put("message", "Hello Velocity World");
        data.put("att2", "Another attribute just to make sure it really works");
        model.put("data", data);
        model.put("title", "Example 07");
        return new ModelAndView(model, "hello.vm");
    }, new VelocityTemplateEngine());
}

In this sample, we route the path /hello to a lambda function (as before). This time though, we also pass a new third parameter to the get method: a VelocityTemplateEngine object. This engine will produce the final output from the ModelAndView object that our lambda returns, which pairs a Map of data to be used in the template with the name of the template to use. This causes the model data and the following template to be rendered:

<html>
<head>
    <title>$title</title>
</head>
<body>
    <h1>Example 07 Velocity Example</h1>
    <p>Variables given from controller in the model:</p>
    #foreach ($e in $data.entrySet())
        <p>$e.key: $e.value</p>
    #end
</body>
</html>
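The engine walks the template, replacing $-prefixed references with the corresponding values from the model. As a rough illustration of that substitution step (Velocity itself does far more, including the #foreach loop above), consider this toy renderer:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of $variable substitution from a model map.
// Real Velocity also supports loops, conditionals, and property access.
public class TemplateSketch {
    public static String render(String template, Map<String, String> model) {
        String out = template;
        for (Map.Entry<String, String> e : model.entrySet()) {
            out = out.replace("$" + e.getKey(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> model = new HashMap<>();
        model.put("title", "Example 07");
        System.out.println(render("<title>$title</title>", model));
    }
}
```

The model built in the route plays the role of the map here; the template engine is what keeps presentation concerns out of the handler itself.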

Spark Compared

Let’s look at another, admittedly contrived, example and compare it to the syntax of Google Go, which we recently described in another walkthrough. Let’s pull out the two routes from the earlier example where we varied the response based on the HTTP method used. With this, we have not only the GET method routed, but also POST, which appends the request’s body to the response:

import static spark.Spark.*;

public class HelloWorld2 {
    public static void main(String[] args) {
        get("/hello", (request, response) -> "Hello World!");
        post("/hello", (request, response) -> "Hello World: " + request.body());
    }
}

This is an extremely simple service, but complexity isn’t the point, so bear with us. This time, when a user connects to the endpoint on localhost using an HTTP GET or POST (i.e., http://localhost:4567/hello), Spark will respond; otherwise, the client receives a 404 Not Found error. With Spark, the code is very concise, a quality that cannot be touted enough. Not only does this make it easier to debug, it makes it easier to maintain and evolve. To underscore this, let’s compare it to a similar example in Golang using mux:

package main

import (
    "io"
    "io/ioutil"
    "net/http"

    "github.com/gorilla/mux"
)

func main() {
    r := mux.NewRouter()
    r.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        io.WriteString(w, "Hello, World!")
    }).Methods("GET")
    r.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        body, _ := ioutil.ReadAll(r.Body)
        io.WriteString(w, "Hello world: ")
        w.Write(body)
    }).Methods("POST")
    http.ListenAndServe(":4567", r)
}

Looking at the two, which are functionally identical, the Go version is certainly more code. Both are simple, but the real difference is readability. The Go example uses syntax that may be cryptic and foreign to some programmers, while the Spark example organizes a request/response relationship using a Domain Specific Language designed for exactly this purpose. To be fair, for this basic example we don’t really need mux; Go’s built-in net/http functionality would take much less code, just as dropping Spark from the Java example would. When pitting the two languages against each other sans toolkits, the Go example is far less complex. The point, though, is that Spark gives you a fluid way of setting up routes that is very approachable, even for a novice. In an upcoming post, we’ll also show you how to scale the use of this toolkit as your API’s codebase grows.

Benefits of Spark

As is apparent from even these relatively simple examples, Spark has some great benefits when building APIs, including:

Speed: Spark is a thin wrapper around Java EE’s Servlet API, which ranks very high on industry benchmarks that test not only Java but many programming languages and toolkits.

Productivity: Spark aims to make you more productive, giving you a simple DSL for routing your API’s endpoints to handlers.

Purpose Built: Spark does not come with any bells or whistles. It is designed to do one thing and to do it very well: routing.

Cloud Ready: Spark is a great, lightweight alternative to other, heavier frameworks, making it perfect for applications throughout the entire cloud stack.

Utilizing Java’s familiar syntax and raw power is definitely an effective choice when launching your API. When you combine this with the benefits of Spark, you get a powerful pair that will make you more productive. Later this week, we will publish another post building on our knowledge of the Kotlin programming language, showing how to use that JVM language with Spark. We’ll use it to extend these basic examples to include Controllers and Dependency Injection (DI). In another upcoming post, we will also explore the use of Spark with Scala, so be sure to sign up for the newsletter and catch those as soon as they’re out.

Resources

In the meantime, to learn more about Spark, check out these online resources and examples:

Read up on these, and try out Spark today. Feel free to let us know in a comment below how it turns out.