[How to handle the population and testing of an open world game with thousands of items and hundreds of NPCs? Capcom Game Studio Vancouver technical director Tom Niwinski describes the toolset the team constructed for Dead Rising 2.]

Building a large open world game is hard. Add thousands of items, hundreds of NPCs, a few hundred quests, and it starts to become very difficult to keep a handle on where things are. Dead Rising 2 was no exception to this.

We needed a good toolset that would allow our designers to effectively add new items to our world, find where they placed their items, and iterate on their work to provide a high quality experience for our fans. In the rest of this article we will discuss our toolset and what worked well when populating our world[i].

The Size of the Problem

The Dead Rising 2 world is roughly a square kilometer in size[ii]. Over the course of the project about 20 people worked concurrently to create roughly 300 missions, 200 NPCs and about 18,000 items that fill the world.

All of these entities needed to be designed, created, iterated and tested, and these are just the things that shipped in the final game (we didn't count the thousands of things that were left on the cutting room floor).

We quickly realized that we needed a toolset that would scale smoothly as the game got bigger, find a good workflow for concurrent placement, and figure out how our content creators could iterate quickly.

The data management came in two flavors: the content creator data (the item placements, NPCs, etc.) and the game generated data (pipeline logs, game telemetry, scheduling software, etc.). Each had its own set of challenges.



Perhaps we didn't think this all the way through.

Tool Philosophy

The main philosophy is to enable our content creators to do their work without programmer intervention. That's to say, the tools should be friendly enough that any designer can add/modify/delete content on their own (one could call this a data-driven approach to game making). The more content creators can iterate on their own, the more interesting the creations they develop. More importantly their ownership and interest in their work grows drastically, and the features become significantly more polished. Here are the big successes we've had during Dead Rising 2:

Live Editing. Typically a tool will be some sort of viewer that allows a designer to tweak their content data. That data is then compiled, built, or baked, and somehow makes its way into the game. Often this process is long -- minutes if not hours -- requires restarting the game, and usually involves having a programmer enter some secret code.

It seemed much better to have a designer move objects around at run time from the comfort of their PC, so we developed a communications protocol that could talk between a game console and the PC. This was a bidirectional protocol that allowed commands from tools to activate RPCs in the game, and allowed the game to send information back to the tools. Tools could now make the game do things (like spawn objects, move their locations, change an item's attributes, etc.) at runtime.
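As a rough illustration of what such a tool-to-console protocol might look like, here is a minimal sketch in Python. The wire format (a length-prefixed JSON frame) and the command names ("spawn_item", etc.) are invented for this example; Dead Rising 2's actual protocol is not described in that detail.

```python
import json
import struct

# Hypothetical wire format: 4-byte big-endian length prefix + JSON body.
# One side (the PC tool) encodes commands; the other (the game) decodes
# them and dispatches to a handler that mutates game state.

def encode_command(name, **args):
    """Tool side: pack an RPC call into a length-prefixed frame."""
    body = json.dumps({"cmd": name, "args": args}).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_command(frame):
    """Game side: unpack a frame back into (name, args)."""
    (length,) = struct.unpack(">I", frame[:4])
    msg = json.loads(frame[4:4 + length].decode("utf-8"))
    return msg["cmd"], msg["args"]

# Game-side dispatch table: command name -> handler. In a real game
# these handlers would spawn or move entities; here they just report.
def spawn_item(item_id, x, y, z):
    return f"spawned {item_id} at ({x}, {y}, {z})"

HANDLERS = {"spawn_item": spawn_item}

def handle_frame(frame):
    cmd, args = decode_command(frame)
    return HANDLERS[cmd](**args)
```

In practice the frames would travel over a TCP socket to the console; because both directions use the same framing, the game can push information (logs, entity state) back to the tool with the same code.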

Tools could now be written in the language that was right for the task (C#, Python, Qt, etc.). Designers had nice GUIs with standard buttons, drop-downs, menus, etc. While they were creating the content they could easily see their work in the game, and when they were done they only needed to check their work into our source control (no extra compiling, bundling, or baking). This meant there was no separate viewer for the programmers to support: when a new rendering feature was added to the engine, it didn't need to be ported into all of the tools, because the game itself was the viewer.

The downside was that if the build broke, content creation was negatively affected. This made it that much more important to have solid submissions into our source control. It was also more difficult to do bulk operations, or operations that would require editing across levels. These negatives were easily offset by being able to see exactly what the user would see while editing, knowing that everything fit in memory and would work on the target platform.

The Pile of Data. There needed to be a common place to get/put/query generated data. Hopefully you are now thinking "a database." If your first thought was a directory on the network, after you finish this article, go and install your favorite database and stop writing custom scripts to do selects and joins on text files.

Dealing with text files is easy; any junior programmer can open a file, parse it, and write it out again. However, this creates a random assortment of data files, probably in different formats, and getting any useful information out means some custom coding. Having everything in a database ensures that info is in one place, and getting answers to questions about your data is only a SQL query away. Not to mention one can now stop writing/debugging parsing code and worrying about concurrent user access.
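A toy illustration of the point, using Python's built-in SQLite (the table and column names here are invented, not the studio's actual schema): once item placements and playtest telemetry live in tables, a question like "which placed items did playtesters never pick up?" is one join instead of a custom script over text files.

```python
import sqlite3

# In-memory database standing in for the shared "pile of data".
# Table and column names are invented for this example.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE placements (item_id TEXT, level TEXT, placed_by TEXT);
    CREATE TABLE telemetry  (item_id TEXT, times_picked_up INTEGER);
""")
db.executemany("INSERT INTO placements VALUES (?, ?, ?)", [
    ("baseball_bat", "food_court", "alice"),
    ("chainsaw",     "casino",     "bob"),
])
db.executemany("INSERT INTO telemetry VALUES (?, ?)", [
    ("baseball_bat", 412),
    ("chainsaw",     0),
])

# "Which placed items were never touched in playtests?" -- one query,
# no file parsing, no worrying about concurrent readers.
unused = db.execute("""
    SELECT p.item_id, p.level
    FROM placements p JOIN telemetry t ON p.item_id = t.item_id
    WHERE t.times_picked_up = 0
""").fetchall()
```

The same query against a directory of text files would mean writing and debugging a parser for each format before you could even start joining them.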

Having said all that, for content-creator data, the tools worked on text files on local machines. Tools that connected to a live database would have made the game behave non-deterministically -- in other words, while other people worked, they could alter your game -- and it would also have been very difficult to have proper revision control for the data. Instead, a copy of the content creator data was added to the database during the nightly build process, so that we could treat it as generated data and run validation tests and various other queries without having to join text files.
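The nightly ingest step can be sketched as follows. The pipe-delimited "item_id|level|x|y|z" line format is hypothetical, purely for illustration; the point is that parsing happens once, at build time, after which everything downstream is queries.

```python
import sqlite3

# Sketch of a nightly build step: parse checked-in placement files
# (a hypothetical "item_id|level|x|y|z" line format) into the database
# so validation queries can run against them like any generated data.

def ingest(lines, db):
    """Parse placement lines and bulk-insert them; return the row count."""
    rows = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines in the source file
        item_id, level, x, y, z = line.split("|")
        rows.append((item_id, level, float(x), float(y), float(z)))
    db.executemany("INSERT INTO placements VALUES (?, ?, ?, ?, ?)", rows)
    return len(rows)

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE placements (item_id TEXT, level TEXT, x REAL, y REAL, z REAL)"
)
count = ingest([
    "# checked in by a designer",
    "baseball_bat|food_court|12.0|0.0|4.5",
    "chainsaw|casino|88.2|0.0|-3.1",
], db)
```

Because the database copy is rebuilt from source control every night, designers keep deterministic local files and proper revision history, while the studio still gets one queryable picture of the whole world.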

[i] I say populated, as the world geometry and textures were built with off-the-shelf 3D packages; it would be silly to create, support, and train people on a proprietary 3D package when finished solutions are readily available.

[ii] While it may seem small, walking a kilometer will take about 12 minutes; that’s why Chuck drives the motorbike.