My wife was once highly offended by a cartoon mole.

One of the few games I’ve ever been able to get her into was Animal Crossing on the Nintendo GameCube. I thought it would be a good gateway game for her, but one day she announced that she was angry at the game.

“That stupid mole called me a cheater,” she said, and I immediately understood. Though ostensibly there to remind players about the importance of saving their game, Mr. Resetti the mole was also Animal Crossing’s reaction to people who tried to cheat by turning the game off without saving. Doing so would let you try for a better selection of random items in the game’s general store.

But if you did this, Mr. Resetti would know. And he’d be super pissed.

Apparently my wife had assumed the game autosaved and thus just kept hitting the power button when she was done. The result was an unavoidable and lengthy lecture from “the stupid mole” that stood in stark contrast to the saccharine tone of the rest of the game. During one of his diatribes, he actually berates the player by saying “You oughta be ashamed. Huh? What’s that? Speak up, you reset-happy CHEATER.”

That stings even if you WERE purposely resetting the game, because “cheater” is a powerful label. Breaking the rules in video games covers a wide range of activities, and a lot of them aren’t as bad as installing a wallhack or using a lag switch to become an impossible-to-hit target. There are small acts of cheating that many of us are probably guilty of: using a dictionary to win at Words With Friends, editing save files in Dungeon Defenders to get impossibly awesome equipment, or dropping out of online games to avoid getting a loss on our records. Heck, even buying illicit gold in massively multiplayer games can be routine if you know where to look.

What kinds of circumstances make us more or less likely to leap into such transgressions? Besides the threat of punishments like VAC-banning on Steam or getting a Battle.net account suspended, recent research has shown that the threat of having to update our own self-image as a “cheater” or “a dishonest person” can be a surprisingly strong deterrent.

Researchers Christopher Bryan, Benoit Monin, and Gabrielle Adams tested this idea directly on the campus of Stanford University. They approached students and asked them to participate in tasks like flipping a coin ten times while trying to use THE POWER OF THEIR MINDS to make it land on heads as much as possible. They set subjects up to be able to cheat by recruiting them online and asking them to perform the task at their home computers, unsupervised. To motivate them to consider cheating, the experimenters offered $1 for each “heads” the subjects supposedly produced.

Here’s the thing, though: half the subjects were given instructions that warned against “cheating,” while the other half received almost identical instructions that mentioned “being a cheater.” For example, one group got “PLEASE DON’T CHEAT” at the top of their self-report form, while the other got “PLEASE DON’T BE A CHEATER.” The researchers guessed that the latter would be a more effective deterrent, since it more directly attacked people’s self-concept. And indeed, such a simple nudge caused those in the “don’t be a cheater” condition to report significantly fewer heads. The difference wasn’t huge, but it was there: an average of 4.88 heads for the “Don’t be a cheater” group versus 5.49 for the “Don’t cheat” group.
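To put those averages in perspective: for ten fair flips, the expected count is 10 × 0.5 = 5 heads, so a group average above 5 hints at inflated reports, while 4.88 sits right around chance. A quick simulation of purely honest reporting makes the baseline concrete (this sketch is my own illustration, not from the paper):

```python
# Simulate many honest subjects each flipping a fair coin 10 times
# and reporting their true number of heads.
import random

random.seed(0)  # fixed seed so the result is reproducible
trials = [sum(random.random() < 0.5 for _ in range(10)) for _ in range(100_000)]
print(sum(trials) / len(trials))  # very close to the expected 5.0
```

Honest reporting averages out to 5 heads; the telekinesis framing gives subjects cover, but the group means let researchers detect cheating in aggregate without accusing any individual.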

Other researchers, though, have found similar and much bigger effects through other, equally simple invocations of self-image. Nina Mazar, On Amir, and Dan Ariely did a great series of experiments where subjects sat in a group and were given sheets of paper, each containing 20 matrices of nine numbers. Their task was to find and circle two numbers in each matrix that added up to 10. Here’s an example I recreated:
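Since the matrix image doesn’t reproduce well here, a minimal code sketch of the task works just as well. The nine numbers below are made up for illustration; as in the real task, exactly one pair of them sums to 10:

```python
# A made-up grid in the spirit of the Mazar et al. matrix task:
#   1.69  4.67  6.81
#   3.05  5.82  5.06
#   4.28  6.36  5.33
# Exactly one pair of these nine numbers sums to 10.00.
from itertools import combinations

matrix = [1.69, 4.67, 6.81, 3.05, 5.82, 5.06, 4.28, 6.36, 5.33]

def find_pair(numbers, target=10.0):
    """Return the pair summing to the target (rounded to the cent)."""
    for a, b in combinations(numbers, 2):
        if round(a + b, 2) == target:
            return a, b
    return None

print(find_pair(matrix))  # → (4.67, 5.33)
```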

Not difficult, but not so trivial that people would be likely to find all 20 pairs of numbers within the 5-minute time limit they were given. In fact, the researchers had done their homework and knew that in 5 minutes most people could be expected to solve around 7 of the matrices. In addition to the papers with the matrices, subjects were also given an envelope containing cash, from which they would extract their earnings at the end of the experiment. The more matrix puzzles they solved, the more money they got.

Mazar and her colleagues ran several versions of this experiment, but the general setup was that some subjects were given a chance to cheat. They were told to destroy their papers in a shredder, then self-report how many matrices they had solved. So people had both an incentive to cheat (they were paid more) and the freedom to do it. A control group did the same task but knew that their answers were actually going to be scored, and thus had no chance to cheat.

From previous work with this task the researchers knew that people who could cheat generally would, reporting having solved an average of 12 matrices versus the 7 in a control group. But they were interested in whether they could manipulate the amount of cheating by either protecting or endangering subjects’ self-image, specifically their image of themselves as an honest person.

In one iteration of the experiment they highlighted moral standards by having subjects write down as many of the Ten Commandments as they could. The result? People who were asked to write down the Commandments but had the opportunity to cheat without getting caught didn’t do it. At all. Similar results happened when they had subjects indicate that they understood that their conduct fell under the purview of the university’s honor code. This is one reason why I think games would benefit from putting anti-cheating messages on loading screens, or even having players agree to an occasional “I agree not to cheat/drop out/grief/whatever” statement before joining a multiplayer match.

But it turns out that people can also be nudged into cheating MORE. In one experiment Mazar and her colleagues wanted to make it easier for people to label their behavior as something other than cheating. To do this, they simply paid people in tokens. This was kind of silly, since subjects immediately turned around and exchanged the tokens for cash, but it worked. In fact, it REALLY worked: subjects who cheated to get more tokens reported solving, on average, almost three times as many problems as those in the control group. All from letting them think “I’m claiming tokens” instead of “I’m stealing money.”

This may sound absurd, but it matches up with the real world quite well. Stealing cash from the register? No way. But taking an extra-long lunch break without reporting it or padding an expense report? Those things happen a lot more than stealing cash of equal value. And in fact, I think video games facilitate this kind of thing by their very nature. Nothing in a video game is physical, and often it isn’t even money; everything is abstracted. Selling gold in World of Warcraft? Duping items in Diablo 3 and then dumping them in the real-money auction house? If you’re saying “That’s different” or “That’s not cheating,” then you’re doing exactly what Mazar describes: protecting your self-concept as a non-cheater by recategorizing your behavior.

But here’s the positive spin on all this: even when given chances to do otherwise, people in these experiments only cheated a little. Only a few extra heads-up coin tosses were reported, and only a few extra matrices were falsely claimed as solved. The main way people seem to protect their self-image is by putting a throttle on their cheating impulses, and reminders of (or attacks on) that self-image can throttle them down even further.

Irate cartoon moles are optional but apparently effective.