When I recently told a coworker that Rust has macros, his first reaction was that this was bad. Previously I would have had the same reaction, but a part of what learning Rust has taught me is that macros don’t need to be bad. This post exists to help explain why that is, by diving into what problems macros solve, with a brief look at their downsides as well. In other words, this post is not a technical deep dive on how macros work, but focuses on the use cases for macros, and doesn’t require much knowledge about Rust to follow.

Why Fear Macros?

Macros are a form of metaprogramming; that is, they are code that manipulates code. Metaprogramming has gotten a bad reputation because it is easy to implement and use in ways that are unhealthy for your code; examples include #define in C, which can easily interact with regular code in unpredictable ways, or eval in JavaScript, which opens websites up to code injection attacks, to name a few.

State of Macros in Rust

Many of those problems can be solved with the right designs though, and macros open the door to reaching some goals that are near and dear to the Rustic way of programming:

- **Generating redundant or trivial code** (a.k.a. boilerplate code) instead of letting the developer write it by hand.
- **Extending the language**, for experimenting before new syntax is added proper, or filling gaps in the language.
- **Optimizing performance**, by doing at compile-time what could be done at run-time.

To reach these goals, Rust includes two types of macros. They are known by a few different names (procedural, declarative, macro_rules, etc.), but I find those names quite confusing. Fortunately they aren’t too important, so I’ll just refer to the macro types as function-like and attribute-like.

The high-level reason to have two types is that they slot more easily into different situations: function-like macros are easy to include as a part of your regular control flow, where attribute-like macros are a better fit for generating code that doesn’t fit naturally in any particular flow. Otherwise their end results are much the same: the compiler erases the macro invocation during compilation, replacing it with the code that is generated from the macro, and finally compiling that together with the rest of your code. That the implementations of the two macro types are wildly different is not something we will concern ourselves with here though.

Motivation for Function-like Macros

A function-like macro can be invoked almost like a function. You can tell the difference by the !:

```rust
let x = action();  // Function call
let y = action!(); // Macro invocation
```

So why use a macro when you can use a function? It’s important to remember that function-like macros actually have nothing to do with functions; they have just been designed to look similar to functions to make them easier to use, but they could have been designed many other ways. So the comparison is not macros versus functions, but really, computation with and without the ability to change the source code. Let’s do some comparisons!
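To make that distinction concrete, here is a small sketch using the built-in stringify! macro, which turns the source tokens it receives into a string literal. A function could never do this, because a function only ever sees the evaluated value, not the code that produced it:

```rust
fn main() {
    let x = 21;
    // A function would only receive the value 42; the macro also
    // sees the tokens `x * 2` that produced it.
    println!("{} = {}", stringify!(x * 2), x * 2); // prints: x * 2 = 42
}
```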

Helpful Assertions

We’ll start out easy with assert!, which is used to verify that some condition is true, panicking if it is not. Given that the assertion has to happen at run-time, what benefit does metaprogramming buy us? Let’s look at the message that gets printed when assert! fails:

```rust
fn main() {
    let mut vec = Vec::new(); // Create empty vector
    vec.push(1);              // Push an element into the vector
    assert!(vec.is_empty())   // Vector isn't actually empty, so assert! fails
    // Prints:
    // thread 'main' panicked at 'assertion failed: vec.is_empty()', src\main.rs:4
}
```

The message contains the actual condition that we are asserting! In other words, the macro creates the panic message based on the source code, and we get an informative error without having to write one ourselves.
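The related assert_eq! macro goes a step further: on failure it prints the values of both operands, not just the source text of the condition. A small sketch (the exact wording of the panic message varies between compiler versions):

```rust
fn main() {
    let vec = vec![1];
    assert_eq!(vec.len(), 1); // passes; values are only printed on failure
    // assert_eq!(vec.len(), 0) would instead panic with a message
    // along the lines of:
    // assertion failed: `(left == right)`
    //   left: `1`,
    //  right: `0`
}
```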

Type Safe String Formatting

It’s common for programming languages to embed a small string formatting language in some form, as such a language makes working with strings much more maintainable. Rust is no different, and format! is Rust’s take on such a language. But the question is again: why should we use metaprogramming to solve this problem? Let’s see it in action by looking at println! (which uses format! to handle its input):

```rust
fn main() {
    // Plain input
    println!("{} is {} in binary", 2, 10);
    // Prints: 2 is 10 in binary

    // Numbered arguments and applying the binary formatter
    println!("{0} is {0:b} in binary", 3)
    // Prints: 3 is 11 in binary
}
```

There are many reasons for format! to be implemented as a macro, but the key trick I want to highlight is that it can break the string apart at compile-time, analyze it, and check the given inputs to see if they are type safe. In other words, we can change the examples and make them fail to compile:

```rust
fn main() {
    println!("{} is {} in binary", 2/*, 10*/);
    // Compilation error; expected two arguments but only found one

    println!("{0} is {0:b} in binary", "3")
    // Compilation error; the binary formatter is not implemented for strings
}
```

In many other languages, these errors would have appeared at run-time instead. But in Rust we can use macros to move the cost of type checking this otherwise foreign language to compile-time, and generate efficient code for formatting without run-time checks.
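As a further sketch, format! itself applies the same compile-time parsing when building a String, so argument counts, alignment specifiers, and formatting traits are all checked before the program ever runs (the named arguments shown here are part of the standard format syntax):

```rust
fn main() {
    // The format string is dissected at compile time, so a missing
    // argument or an unsupported formatter would be a compile error here too.
    let s = format!("{name} scored {score:>5}", name = "Ada", score = 42);
    assert_eq!(s, "Ada scored    42"); // `>5` right-aligns within 5 characters
}
```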

Logging with Zero Cost Abstractions

For the last example of function-like macros, we’ll dive into the ecosystem a bit. Here Rust has the log crate as the primary logging front-end; like many other logging solutions, it exposes different levels of logging, but unlike other solutions, these levels are exposed as macros and not functions.

The reason I think logging demonstrates a lot of the power of metaprogramming is the way it uses the macros file! and line!; these give an efficient way to pinpoint the exact source location of e.g. a logging call. Let’s look at some code to see what I mean; since the log crate is only a logging front-end, let’s add the flexi_logger crate as our back-end, to collect and print the logs.

```rust
#[macro_use]
extern crate log;
extern crate flexi_logger;

use flexi_logger::{Logger, LogSpecification, LevelFilter};

fn main() {
    // Hard code the trace level as our minimum logging level
    let log_config = LogSpecification::default(LevelFilter::Trace).build();
    Logger::with(log_config)
        .format(flexi_logger::opt_format) // Specify how we want the logs formatted
        .start()
        .unwrap();

    // Logging is ready. Let's use it to debug our complex algorithm
    info!("Fired up and ready!");
    complex_algorithm()
}

fn complex_algorithm() {
    debug!("Running complex algorithm.");
    for x in 0..3 {
        let y = x * 2;
        trace!("Step {} gives result {}", x, y)
    }
}
```

Which will print this when run:

```
[2018-01-25 14:48:42.416680 +01:00] INFO [src\main.rs:16] Fired up and ready!
[2018-01-25 14:48:42.418680 +01:00] DEBUG [src\main.rs:22] Running complex algorithm.
[2018-01-25 14:48:42.418680 +01:00] TRACE [src\main.rs:25] Step 0 gives result 0
[2018-01-25 14:48:42.418680 +01:00] TRACE [src\main.rs:25] Step 1 gives result 2
[2018-01-25 14:48:42.418680 +01:00] TRACE [src\main.rs:25] Step 2 gives result 4
```

See how our logs contain file names and line numbers? There are two reasons this is worth looking into:

1. We get this information with zero run-time cost for collecting the data.
2. The data is correct and useful.

For #1, the compiler inserts this information into strings embedded in our binary, which we can print. If we didn’t have a compile-time solution for this, we would probably have to resort to consulting our stack trace at run-time, which is much more error prone and costly for performance.
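A minimal sketch of that compile-time substitution: file! and line! simply expand to a string literal and an integer literal baked into the binary, so there is nothing left to compute at run-time:

```rust
fn main() {
    // Both macros expand to plain literals during compilation;
    // no stack trace or other run-time machinery is involved.
    let location = format!("{}:{}", file!(), line!());
    println!("This was printed from {}", location);
}
```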

And to see what I mean by #2, consider if we changed the logging macros to functions, which still call file! and line! internally:

```rust
fn info(input: String) { // Contrived version of info!
    Log::log(
        logger(),
        RecordBuilder::new()
            .args(input)
            .file(Some(file!()))
            .line(Some(line!()))
            .build()
    )
}
```

If we tried to use this function in our previous example, the output would be something like:

```
[2018-01-25 14:48:42.416680 +01:00] INFO [src\loggers\info.rs:7] Fired up and ready!
```

Both file name and line number are useless, because they refer to the file and line of the logging function. In other words, the original example works precisely because we use a macro; the macro is replaced with the code it generates, putting file! and line! directly into our own source code, giving us the information we expect to get.
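A hypothetical miniature version of such a logging macro shows the mechanism (my_info! below is a made-up stand-in for log’s info!): the macro body is pasted in at the call site, so file! and line! expand where the macro is used, not where it is defined:

```rust
// my_info! is a made-up stand-in for the log crate's info! macro.
macro_rules! my_info {
    ($msg:expr) => {
        // This expansion happens at the caller's location, so file!
        // and line! report the caller's file and line number.
        println!("INFO [{}:{}] {}", file!(), line!(), $msg)
    };
}

fn main() {
    my_info!("Fired up and ready!");
}
```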

Motivation for Attribute-like Macros

Rust includes a concept called attributes, which is a way of annotating items in the code for different effects. For example, declaring that a function is a test looks like this:

```rust
#[test] // <- attribute
fn my_test() {
    assert!(1 > 0)
}
```

Running cargo test will then execute this function. Attribute-like macros allow you to build new attributes, which look like native attributes, but have their own effects. At this point in time, there is an important limitation though: only macros that use the built-in derive attribute work on the stable channel, with macros using fully custom attributes available on nightly builds; we’ll get into what the difference is below.

When looking at the benefits of attribute-like macros, the same addendum applies as before: to see the benefits, we have to compare code that can to code that cannot manipulate the source code.

Deriving Boiler Plate

The derive attribute is used in Rust to generate trait implementations for us. Let’s look at PartialEq as an example:

```rust
#[derive(PartialEq, Eq)]
struct Data {
    content: u8
}

fn main() {
    let data = Data { content: 2 };
    assert!(data == Data { content: 2 })
}
```

Here we create a struct that we want to be able to check equality on (or use the == operator on, in other words), so we derive a PartialEq implementation to do so. We could have implemented PartialEq ourselves, but our implementation would have been trivial, because we just want to test the struct contents for equality:

```rust
impl PartialEq for Data {
    fn eq(&self, other: &Data) -> bool {
        self.content == other.content
    }
}
```

This is more or less the code the compiler generates for us anyway, so deriving saves us some typing, but more importantly, it removes the maintenance burden we get from having to keep our struct definition in sync with our implementation. If we added another field to our struct, it would probably be important that we updated our implementation of PartialEq to take that field into account, or two instances of the struct could be declared equal when they are not.

Lifting this maintenance burden is a huge part of why macros matter; whenever we can derive an implementation, we’ve made our struct definition the single source of truth for that implementation, and we have a compile-time guarantee that our implementation is in sync with our struct. This also explains why serde is the go-to example for custom derive implementations; serde is used for serializing our data structures, and without macros we would have to use strings to tell serde the names of struct fields, and manually keep those strings in sync with the struct definition.
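To see the single-source-of-truth effect in miniature, consider also deriving Debug (the extra field below is made up for illustration); the printed representation and the equality check both track the struct definition with no extra code from us:

```rust
#[derive(Debug, PartialEq)]
struct Data {
    content: u8,
    // Adding this field automatically updates every derived impl;
    // hand-written eq and Debug code would have to be edited by hand.
    label: &'static str,
}

fn main() {
    let data = Data { content: 2, label: "two" };
    // The derived Debug output mirrors the current struct definition.
    assert_eq!(format!("{:?}", data), "Data { content: 2, label: \"two\" }");
}
```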

Derive With Benefits

The derive mechanism above is only a subset of what attribute-like macros can do: they can generate any code you like, not just trait implementations. This more general ability is the part that is only available on nightly at the time of writing, but it should hopefully be stabilized in 2018.

The most prominent usage of this at the moment is probably Rocket, a framework for writing web servers. Rocket uses the single source of truth principle we saw above for handling REST services; creating a REST endpoint requires putting an attribute on a function, and the function now contains all the information needed to make it a working endpoint:

```rust
#[post("/user", data = "<new_user>")]
fn new_user(admin: AdminUser, new_user: Form<User>) -> T {
    //...
}
```

If you’ve worked with web frameworks in other languages (e.g. Flask or Spring), then this style is probably not new to you; I won’t do a nitty-gritty comparison to those frameworks here, but just emphasize that you can write similar code in Rust, while still maintaining all the benefits of Rust (performance, etc.).

Downsides

Macros aren’t all sunshine and roses of course, so I’ll run through some of the issues they bring in Rust. The first issue is compile time: since macros generate code during compilation, and that generated code must then itself be compiled, compile times go up more with macros than without them. Similarly, because macros can easily be used as a structured way of copy-pasting code, you can also make the size of your binary go way up if you’re not attentive. This was an issue for the clap crate, where the author wrote a good blog post on how he discovered the issue and put the code on a diet.

Debugging also gets harder compared to normal code, because you have to debug generated code. Fortunately there are tools to help you if you need to debug a macro, but it is still early days. And while the Rust compiler will report errors in macro usages, it is really up to macro authors to make these errors nice. Again, there’s some support for doing this (with compile_error! and crates like syn), but the quality is not consistent across all macros.

And finally there’s something a bit more subjective: DSL overload. We looked at format! which takes its input in the form of a small language that isn’t Rust; it is in fact a domain specific language, or DSL for short. And while DSLs are a powerful tool, it is easy to get overwhelmed by them if everyone is eager to create their own unique snowflake of a language. So if you’re considering writing a DSL, remember that with great power comes great responsibility, and just because you can make a DSL doesn’t necessarily mean you should.

Conclusion

Learning Rust has taught me that macros are in fact a powerful tool that can enhance many different aspects of our applications. I hope I’ve made the case for you too that macros are a net positive to have in Rust; if not, I’ve hopefully convinced you that they are at least powerful and they have their use cases.