Published: Sat 30 May 2015 In misc. tags: rust web programming

I wanted to create a website for a personal project. This is usually the great opportunity to learn - no time pressure, no external requirements, etc. That meant I could choose the language I wanted to try out in anger (Rust) and take it for a spin. Here’s a short summary of the experience.

The state of webdev in Rust

The Rust environment has some support for web development, but it’s still very basic. That’s not a discovery - it’s a well-known fact, documented on Are we web yet?. However, unless you need a lot of pre-packaged components, you can already write some services.

I didn’t have huge requirements: a read-only database-backed website with some 2-page admin panel. The website is not complete yet, but I’ve done most of the pages and now it just needs more typing, not thinking. So here are the usual good, bad and ugly.

What worked well

The basics are there. I used the Iron framework, which provides the server part with routers, static file handling, connection management, etc. It’s an ecosystem of its own. Jonathan Reem manages most of the github bits with at least one contribution every day for the last year… Impressive! Most of the useful Iron elements are in his repos and now it looks like an HTTP/2 parser is on the way.

The router does its job. It supports getting values from the URL and it can be composed (that is, both the Router and the Chain are Handlers). It doesn’t support regex matching or casting parameters to the right type, but it’s functional. The same goes for logging and static file handling. No frills, but they work.
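The composition property is worth a quick illustration. This is not Iron’s actual API (its real Handler works on Request/Response types); it’s a deliberately simplified sketch of why “everything is a Handler” makes nesting a Router inside a middleware Chain trivial:

```rust
// Simplified stand-ins for Iron's types: everything that can answer a
// request implements one common Handler trait.
trait Handler {
    fn handle(&self, path: &str) -> String;
}

// A toy router - the real one would match the path against routes.
struct Router;
impl Handler for Router {
    fn handle(&self, path: &str) -> String {
        format!("routed {}", path)
    }
}

// A Chain wraps *any* Handler, so a Router slots straight in.
struct Chain<H: Handler> {
    inner: H,
}
impl<H: Handler> Handler for Chain<H> {
    fn handle(&self, path: &str) -> String {
        // Middleware (logging, sessions, ...) would run before/after here.
        format!("logged: {}", self.inner.handle(path))
    }
}

fn main() {
    let chain = Chain { inner: Router };
    println!("{}", chain.handle("/events/42"));
}
```

Because Chain itself implements Handler, chains can wrap other chains the same way.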

There aren’t that many template libraries to choose from yet, but handlebars-iron does the job.

The feeling that when the code compiles, it will not explode at runtime with some silly error is really really good. Actually the compiler checks cover most of the things I’d normally unit-test, so this is probably the only non-trivial project I wrote without checks and I’m OK with that. The only things I’d like to unit-test would probably get a package of their own and not stay in the webapp itself.

The code seems quite compact. There are some parts which are verbose, and they’re described later. But ~500 LOC include all initialisation and config, DB entities and operations, around 7 routes, verifying arguments and passing them to templates. That’s about as much as I’d expect from a similar project in Python.

What didn’t work very well

API restrictions, lifetimes, …

Some things are just harder in Rust. There were moments where I wanted to do basic refactoring and the design of the libraries was simply against me. I realise that typically happens for a good reason, but there are also really odd cases.

One of those is described in an r2d2 issue - unfortunately it’s not possible to return the connection from a function without either creating a new type which will also keep a ref-counted connection manager, or mut-borrowing the whole request. Of course the latter prevents getting other things from the request… like URL parameters. Issues like that suddenly throw a spanner in the works and leave you analysing lifetimes, browsing docs, trying to figure out whether you’re wrong or whether the library design is really incompatible with what you’re trying to do.

On the other hand you end up learning a lot about lifetimes and borrows in practice…

Passing data in/out

Another bad part is Rust’s JSON handling. It badly needs macros which make things easier. Using standard types results in things like:

```rust
let mut data = BTreeMap::new();
data.insert("events".to_string(), events.to_json());
let page = Template::new("events_page", data);
```

when I really want it to be only something like:

```rust
let data = json_obj!{ "events", events };
let page = Template::new("events_page", data);
```

Fortunately the ToJson trait handles creating more complex objects and Vec<Event> in this case could serialise itself.
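A macro like that is not hard to sketch with macro_rules!. Here’s a simplified, std-only version (json_obj is my hypothetical name, and values are plain Strings instead of Json - the real thing would call to_json() on each value):

```rust
use std::collections::BTreeMap;

// Hypothetical macro: expands to the BTreeMap boilerplate shown above.
// A real version would insert `$value.to_json()` instead of a String.
macro_rules! json_obj {
    ( $( $key:expr => $value:expr ),* $(,)* ) => {{
        let mut map = BTreeMap::new();
        $( map.insert($key.to_string(), $value.to_string()); )*
        map
    }};
}

fn main() {
    let data = json_obj!{ "events" => "three of them" };
    println!("{}", data["events"]);
}
```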

Update: Apparently there’s maplit - it’s designed for hashmaps / btreemaps and ToJson unfortunately can’t use &str keys, but it’s still an improvement. Using those macros, the following works (and gets rid of the only mut in the handlers - yay!):

```rust
let data = btreemap!{
    "events".to_string() => events.to_json(),
};
```

Compile times

Finally… the iteration time is just bad. It doesn’t matter if you’re writing some bigger piece of functionality, but when trying to solve some tricky compile error or just experimenting, waiting for more than 5 seconds starts as a bother and really irritates after the third try. A tiny project with lots of dependencies (516 LOC, 63 dependency crates) takes 13 seconds to compile - and that’s in debug mode, without optimisations.

After a while I started to recognise which phase of compilation failed based on the time-to-error (unknown names, wrong signatures, borrows, lifetimes, warnings), and learned that once 3 seconds had passed or any warning showed up, only LLVM / the linker / the optimiser were still running and no more errors would appear.

What’s just ugly

Verbose parts

Compared to many static languages, the handlers look tidy. Compared to dynamic languages, they’re terrible. Starting with how to get a numeric id out of the routed URL (the unwraps are safe here - if they fail, that’s Router’s implementation issue, not bad data):

```rust
let id_str = req.extensions.get::<Router>().unwrap().find("id").unwrap();
let id = match id_str.parse::<i32>() {
    Err(_) => return not_found(),
    Ok(result) => result,
};
```

And sure, this could be something like:

```rust
let id = try!(get_url_parameter::<i32>(&req, "id"));
```

But unless you write that function, it isn’t. Same goes for the database connection mentioned earlier, which could be a macro, but cannot be a function, or not easily anyway:

```rust
let pool = req.get::<Read<Database>>()
    .ok().expect("database component not initialised");
let connection = pool.get().unwrap();
```

These really need to be more developer-friendly before people start using them daily.
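That said, writing the helper yourself isn’t much work. Here’s a std-only sketch of the shape of get_url_parameter (a hypothetical helper - route parameters are simplified to a plain map here, whereas the real version would dig them out of Iron’s Request):

```rust
use std::collections::BTreeMap;
use std::str::FromStr;

// Hypothetical helper: look up a named route parameter and parse it
// into the requested type. None covers both "missing" and "unparsable".
fn get_url_parameter<T: FromStr>(params: &BTreeMap<String, String>,
                                 name: &str) -> Option<T> {
    params.get(name).and_then(|raw| raw.parse::<T>().ok())
}

fn main() {
    let mut params = BTreeMap::new();
    params.insert("id".to_string(), "42".to_string());

    let id: Option<i32> = get_url_parameter(&params, "id");
    println!("{:?}", id); // Some(42)
}
```

Returning Option (or a Result, to work with try!) keeps the “bad id means 404” decision in the handler where it belongs.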

Weird interfaces

Some interfaces need to be easier for developers before web development in Rust becomes more common. Figuring out a plugin architecture based on a compile-time hashmap keyed by types with associated values can be complicated. If your goal is just “get me the URL parameter”, then it’s needlessly annoying. It’s great that it works like this under the covers, but I don’t need to know about it.

Import avalanche

When working with many third-party components, which is very common in webdev, the declarations at the top of the file can get rather long. For example the main file in my project contains just the initialisation and route handlers (all database operations, entities, helper functions, etc. are in other modules), yet it still has 30 extern/mod/use lines at the top. With line breaks and comments that takes over one full screen - and that’s with the imports already deduplicated into single-line

```rust
use ...::{Something, Other, ...};
```

entries.

Not the end of the world, but slightly annoying.

What’s been observed

I don’t know if these can be classified as good / bad, but they do give me a nice feeling.

Option

APIs usually handle Option<...> nicely. In JSON, in database connectors, in templates, it just works where it should. That means there’s rarely special casing involved - if you have a related table in the database which may or may not have an entry you’re interested in, it’s probably going to be an Option<Entry> in your handler code.

That’s good, because you won’t see a ladder of special cases checking whether you have something or not. On the other hand, you need to learn to quickly write/read lines of .and_then() , .map_err() and others. While they were new to me, I actually quite like this approach. For example here:

```rust
let event_id = event_name
    .and_then(|name| Some(get_or_create_event(&connection, &name)))
    .and_then(|event| Some(event.event_id));
```

The variable event_id will go directly to the template and I don’t care whether event_name existed, whether it had a matching event, etc. Everything’s going to be fine. Even the postgres connector can translate those to a NULL where needed.

This is very different from the guessing game of “does this function handle null / nil / None properly” in many other frameworks.
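As an aside, .and_then(|x| Some(f(x))) is exactly what .map(f) does, so the chain above could be shortened. A self-contained sketch (get_or_create_event here is a trivial stand-in for the real database helper):

```rust
// Stand-in for the real database helper, just so the example runs:
// pretend the event id is derived from the name.
fn get_or_create_event(name: &str) -> u32 {
    name.len() as u32
}

fn main() {
    let event_name: Option<String> = Some("launch".to_string());

    // .map() applies the function only when the Option is Some,
    // and passes None through untouched.
    let event_id = event_name.map(|name| get_or_create_event(&name));
    println!("{:?}", event_id); // Some(6)
}
```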

No ORM

There’s no big ORM in Rust yet. There’s r2d2 for connection pools, which is very welcome. There are also fairly standard database connectors. But that’s about it. And actually, I don’t mind that much. You can write macros for a mini-ORM (just basic SELECT, INSERT) and handle everything else via SQL. Traits like From<> help a lot, because you can just implement

```rust
impl<'a> From<&'a Row<'a>> for Event {
```

for your types. Then reading them back from results is only:

```rust
rows.iter().map(|row| Event::from(&row)).collect()
```

It’s also really easy to make it generic or throw into a macro if it repeats too many times.
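To make the pattern concrete, here’s a self-contained sketch. Row here is a hypothetical stand-in with plain fields (the real Row comes from the postgres crate and is accessed by column index or name), but the From impl and the collect() call have the same shape:

```rust
// Hypothetical stand-in for the database driver's Row type.
struct Row {
    event_id: i32,
    name: String,
}

#[derive(Debug)]
struct Event {
    event_id: i32,
    name: String,
}

// The conversion lives in one place; every query result reuses it.
impl<'a> From<&'a Row> for Event {
    fn from(row: &'a Row) -> Event {
        Event {
            event_id: row.event_id,
            name: row.name.clone(),
        }
    }
}

fn main() {
    let rows = vec![Row { event_id: 1, name: "launch".to_string() }];

    // Reading results back is a one-liner once From is implemented.
    let events: Vec<Event> = rows.iter().map(|row| Event::from(row)).collect();
    println!("{}", events.len());
}
```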

Update: I’ve been directed to deuterium - never used it, but it looks like a nice postgres query builder and ORM.

Exposing to public

Iron/Hyper are not yet ready to take internet traffic directly. Since Rust doesn’t have nonblocking IO available in a stable form yet, you should put the server behind something that can handle a slow-connection DoS - for example Nginx.
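A minimal reverse-proxy sketch of that setup, assuming the Iron app listens on 127.0.0.1:3000 (the timeout values are illustrative, not tuned):

```nginx
server {
    listen 80;

    # Nginx absorbs slow clients so the blocking Rust server never sees them.
    client_body_timeout 10s;
    client_header_timeout 10s;
    send_timeout 10s;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```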

Summary

It may look like I listed a lot more negatives than positives, but that’s just because it’s harder to talk about good things when they’re expected. I enjoyed the experience and if some other personal project comes up, I think I’ll use Rust again (instead of Python/Flask as usual).

If the project gets bigger, many things will have to be implemented - logging to external collectors, forwarding detailed errors, reporting processing / query times, application/schema migration control, etc. But that’s still in the future. Today, it’s a small, lean project and Iron fulfills all the needs.