Rulebooks still suck. Here’s how we do better.

By Joshua Yearsley (@joshuayearsley)

Board games are growing bigger and bigger, but growth brings growing pains. This year, the Spiel des Jahres, basically the Oscars of board games, put out this statement:

Unfortunately, we are increasingly under the impression that ever more very good games are being hastily put together at the last minute in order to meet release deadlines, without sufficient attention being paid to the comprehensibility and completeness of their rulebooks. We have never had to rule out so many in and of themselves very good games as this year, simply because their rulebooks did not meet the quality we expect. We jury members no longer wish to see ourselves in the role of beta-testers for rulebooks, which are only made adequate on the second printing run.

Their concerns are well-founded. Many, many times, a publisher has approached me and said, “Help! Can you edit this rulebook? We need to get it to print in two weeks!” If you’ve done this, don’t worry—basically everyone does. So we all need to get better here.

But it’s not enough to just plan more time for editing. What we do with that time—our quality-control practices—that’s what we’ve got to improve, and that’s what I’ll be talking about here. In this post, I’ll give three better practices for producing board-game rulebooks and components. In each, I’ll start by showing how my assumptions about process have changed through my experience in writing, editing, and developing board games. I hope through this we can all get better together.

Better Practice #1: Specialize Your Quality Control

Old Assumption: I’m careful and detail-oriented enough to not make mistakes, even if I’m rewriting the rulebook. Wrong!

Observation: If I’m doing major, substantive editing, I’m basically a second writer.

New Assumption: If my work looks like writer’s work, then I’ll need a person to check me too.

In traditional publishing, a non-fiction book will have two or three (or sometimes four) separate editors working on it: a developmental editor involved in broad structure and content, a line editor involved in the line-by-line flow and clarity, a copy editor involved in style consistency and correctness, and a proofreader involved in cleaning up any remaining typos or other errors. That’s not counting the possible fact-checker involved in, well, checking facts.

Why so many different roles? Because there are just too many different facets of what makes “a good book” for one flawed human to pay attention to, and because different people are good at different things—a developmental editor does very different work from a proofreader.

Over the years I’ve noticed that, often, I’ve done my best rulebook work when the publisher has a highly technical, knowledgeable person in-house to partner with me and check my assumptions, and vice versa. This practice is similar to pair programming and code review in the software industry—computer code and rulebooks (basically code for humans) are just so complex that two heads are far, far better than one. One example: over the years, Fantasy Flight has gotten so much better at writing rulebooks—compare Arkham Horror to Eldritch Horror—and one reason is that they now put a technical writer (who is not the designer) and a technical editor on each rulebook.

I know, I know, board-game publishers don’t have loads of money, but the industry is growing and boats are rising. So here’s a start: don’t just hire an editor; hire a proofreader too. They’ll preserve your editor’s sanity and focus, letting them address other, thornier issues. They’ll save your editor’s time, which may save you money in the process. For the average board-game rulebook, a professional proofreader will cost you around $100. Just do it. You’ll thank me, and I’ll thank you.

Better Practice #2: Use Cross-Checks

Old Assumption: All I have to do is make sure my work is correct. Wrong!

Observation: It’s harder to find something missing than something wrong.

New Assumption: I need to cross-check the designer’s and my assumptions by using multiple verified sources.

When I was working on the first run of Vast: The Crystal Caverns, I made an egregious mistake on its rulebook. I didn’t notice that a critical rule was missing—the Knight always flipped tiles face up when she entered them—and it basically broke the game. Not long after, I met Rob Daviau for the first time to talk shop. I mentioned I’d made a mistake on Vast and wanted to talk about how I could prevent it from happening again. Without hesitation he replied, “Oh yeah, the missing rule about the Knight flipping tiles?” I was mortified.

It didn’t matter that I had fully internalized the original rules on the page. What did matter was that part of the game was missing, and I didn’t know it was missing, and that was my fault, not the designer’s. As the editor or publisher, you can’t assume the designer has written all the rules down. They’ve been so busy making the damn game that—rightfully!—they might have missed something. When you’ve worked on something deeply, it’s hard to dig yourself out of it and look at it clearly—that’s one reason editors exist, after all.

So given that disconnect, here are some ways to ensure nothing’s missing:

After you’re familiar with the rules, have the designer or developer play the game in front of you, and cross-check their play to what’s on the page.

Have the designer or developer watch you and your group play a game.

Watch playthrough videos of other groups, and ask the designer or developer about discrepancies between the video and rules as written.

Ideally, you’ll want to do this multiple times, in different ways. Different angles will provide different insights.

Better Practice #3: Do Usability Testing

(I saved the best one for last.)

Old Assumption: I’ve done a lot of editing. I’m an expert. I know how to write a rulebook and make components that will work well for players. Wrong!

Observation: Different people interpret language and symbology differently. What’s clear to one person won’t be clear to another.

New Assumption: To make a good rulebook and components, I need to listen to the players, digest their feedback, and come up with the best solution to the problem.

When making a good rulebook, egregious text errors are the least of your troubles. As I alluded to in practice #1, any game of reasonable length and complexity will be an intricate system—equivalent to computer code—full of interactions and edge cases that don’t appear when first read on the page. All the subtlety only appears during playtesting—sitting down with real people at the table.

I know, this idea is obvious—playtesting is ubiquitous in the design stage. But it all but disappears in the production stage—not once has a publisher asked me to playtest and iterate the rulebook I’m making, or even to account for particular user feedback based on blind playtests. This is wrong. It’s like having a programmer write some code, never run the program, and send off the code to the end user to compile and run themselves.

Right now, some publishers will put up draft rules and see where people’s confusions overlap, and others will send out some copies for blind playtesting, sometimes asking testers to record their games. Both of these methods can get you some information—and the first is especially good for finding really wacky interpretations if you have a large audience—but both capture much less useful information than they could. Why? Because people are notoriously bad at summarizing their thought process long after the fact.

So what’s the solution? Watching a person in person, in real time, giving you a stream-of-consciousness description of how they’re interpreting the rules they’re reading. Literally just sit them down with the box full of components and the rules, and tell them to try to set up the game and start playing. Ask them to say aloud what they’re thinking. Here’s an excerpt of principles from some usability work I did on the game Root:

Don’t explain to the players (at first)—ask them to explain to you. For example, after explaining the basics of the game, map, and factions, I might walk up to the Marquise player and ask, “Could you walk me through your own faction board?” We want to get inside the players’ heads as they work out on their own what the interface means, and we want to see where they go wrong. This is sometimes even more effective than blind testing, because sometimes a player will internalize something incorrect (about a piece of iconography, for example), never interact with the thing they interpreted incorrectly, and never bring it up in the debrief. Sometimes, when the players are blind, you’re blind too.

Don’t answer questions (at first)—ask why they asked. For example, if someone asked, “How does an ambush card work?” it might be because they don’t realize it can be played for its suit, or because they don’t know which clearings you can play it in, or because they don’t know when you can play it. Explaining it right away robs them of the chance to tell you where they were confused.

Don’t make judgments (at first)—ask them to. Likewise, if someone asks an either-or question like “Does the Marquise get VP immediately when building a building, or does she get VP at the end of the turn?” I’d ask, “Well, what do you think is correct?” If they answer correctly, you know that you’re at least on the right track—useful information when you’re considering whether and how to add, remove, and modify [information].

Root was the first game that I did a bunch of usability testing on, and I’m happy to say that the feedback coming in about the rulebooks and aid components is overwhelmingly positive. Some people have even said the rulebook is in the top few rulebooks they’ve ever read. But to get there, I went through about five major iterations of the learn-to-play book alone, some of them near-total redesigns, based on sitdowns I did with fresh readers unfamiliar with the game. It would have been absolutely impossible to fine-tune it as much as I did without the support of real players-to-be.

Even the best of editors can’t avoid the curse of knowledge. Experience and expertise help you make, on average, better and faster decisions, especially on complex issues, but they also separate you from the average reader, which blinds you to your own assumptions about what’s obvious and what’s not. The way you reconnect yourself with your actual readers, your actual players, is to sit down with them and just watch.

In my experience, I’d say a good editor will find no more than 80% of nuanced issues, simply because your editor’s brain is different from other people’s brains. To find that remaining 20%, you’ll need to expose your rules and components to outside eyeballs. The more eyeballs, the better. The more varied those eyeballs, the better. Go outside your expected demographic: put them in front of long-time gamers, casual gamers, people who don’t play games. Test them with people who are anxious about reading rules and teaching games. Sometimes it’s hard to sit and watch people stumble—your ego feels on the line; you want it to be great now and oh no why didn’t they get it, they think I’m bad at this, don’t freak out, Josh, just keep watching and listening—but it will make your game stronger, and even better, it will help make the world of games more accessible and friendly to everyone. And that’s beautiful.