For some reason Fable and KOTOR's reputation systems keep coming up among my group of friends. I personally think those were an interesting try, but ultimately unsuccessful. The problem I have with them is that they come really close, but every once in a while the good/evil system kind of . . . breaks down.

The best example I have is from the beginning of Fable. My friend was playing at the time. There's a wife who tells you to go find her husband. If you find him and come back to tell her where he is, she'll give you a coin. Yay. So you go find him, and he's making out with another girl. He says if you don't tell anyone, he'll give you a coin.

Now, first off, the game assumes the husband is the bad guy. Second, the game assumes the husband doesn't deserve his privacy, because he's being the bad guy. I'd argue with both of those (or, at least, argue with their being obvious assumptions), but that's a moral position.

The big issue is that the game assumes your actions are motivated by the results for the participants, not by cold hard cash. Well, my friend wanted the cash. So he promised he wouldn't tell the wife, and got a coin, and then promptly went and ratted the guy out, and got another coin. Amusingly, grabbing the first coin caused him to become more evil, but the second caused him to become more good – despite the fact that he was playing "I don't care who I hurt, I just want cashola".

The developer can't solve this problem. The only possible "solution" is to quiz the player, at every step, on why they're doing the thing that they're currently doing. And, as entertaining as the idea is, that's not really a solution.

The best fix to this is convenient, because it's actually practical, more realistic, and more interesting. Reputation shouldn't be a global number. It's not a single axis, where priests love you on one side and Hitler loves you on the other. Reputation is a personal thing and a group thing, but not a global thing. Luckily computers are now powerful enough that we can manage this without much trouble at all.

What should have happened is surprisingly simple. Let's assume, for the sake of argument, that the husband is a jerk and that the wife is really in the right. You promise the wife that you'll help her out, and she thinks slightly more of you. You promise the husband that you'll keep his secret, and he thinks a lot more of you. Now you go back and rat him out, and the wife thinks a lot more of you (I presume you don't mention "oh yeah, I took his money first"), but once he finds out – well, the husband hates you.

Now add in communication. The husband probably talks to his friends. He might mention "that kid who ratted me out, after taking my money!" Now they won't like you as much. Now, the husband's friends are probably a bunch of lowlifes, since like attracts like. So now you've got all the scum disliking you. No problem here, right?

The wife is thankful that you helped out. She'll probably mention that to her friends. So now her friends like you a bit (and, incidentally, like the husband quite a lot less – we may as well be thorough here). No problem here either.

But when the two groups meet up and compare notes, if they in fact ever do, they might realize that, okay, so you did tell the truth – but you're also not to be trusted. And neither of them is going to appreciate that – not the wife's side, not the husband's side. That's the proper penalty, and it's simply not a single axis.

This isn't even particularly hard to orchestrate. You can simulate it pretty easily by sending "knowledge packets" between characters. Husband knows "kid cannot be trusted", and that knowledge has a chance of duplicating itself every time someone with it interacts with someone else. Wife knows "kid helped me out", and that also has a chance of duplicating on each interaction. Give both of those infopackets some effect on how people think of you ("cannot be trusted" might actually be a good thing in some circles!) and then add in some broad strokes of who-talks-to-who – which can be very simple and vague, to the point of "people in this village tend to talk to each other and people tend to talk to their spouses" and still be effective – and you've got a system where things like that actually can hurt you later on.
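A toy version of that mechanism, with every name and probability invented for illustration:

```python
import random

# Sketch: "knowledge packets" that have a chance of duplicating whenever
# someone who holds them interacts with someone who doesn't.

class NPC:
    def __init__(self, name):
        self.name = name
        self.knowledge = set()  # packets like "kid cannot be trusted"

def interact(a, b, spread_chance=0.3, rng=random):
    # Each packet either party holds may duplicate itself to the other.
    for giver, taker in ((a, b), (b, a)):
        for packet in list(giver.knowledge):
            if packet not in taker.knowledge and rng.random() < spread_chance:
                taker.knowledge.add(packet)

husband = NPC("husband")
husband.knowledge.add("kid cannot be trusted")
crony = NPC("crony")

# Broad strokes of who-talks-to-who: a few evenings at the tavern.
random.seed(1)
for _ in range(10):
    interact(husband, crony)

print(crony.knowledge)
```

The who-talks-to-who graph can stay exactly as vague as the post suggests – "villagers chat with villagers, spouses chat with each other" is just a list of which pairs get passed to `interact` each simulated day.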

Throw in a few ways to assemble text and you're golden. Player talks to NPC, asks for favor, NPC checks its databases and sees it doesn't like player. Why not? Randomly pick one of its reasons, weighted based on importance. "Well, I'd like to help you out, but I heard about {what you did to Jeffrey off in Smalltown} . . . and I just don't think I can trust you, can I?"
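The reason-picking can be as simple as a weighted random choice over whatever grudges the NPC holds – a rough sketch, with hypothetical reasons and weights:

```python
import random

# Sketch: assemble a refusal line from an NPC's stored reasons,
# picked randomly but weighted by importance. All reasons are made up.

reasons = [
    # (text fragment, importance weight)
    ("what you did to Jeffrey off in Smalltown", 5),
    ("that business with the missing pies", 1),
]

def explain_refusal(reasons, rng=random):
    texts = [text for text, _ in reasons]
    weights = [weight for _, weight in reasons]
    picked = rng.choices(texts, weights=weights, k=1)[0]
    return ("Well, I'd like to help you out, but I heard about "
            f"{picked} . . . and I just don't think I can trust you, can I?")

random.seed(0)
print(explain_refusal(reasons))
```

The heavily weighted Jeffrey incident comes up most of the time, but the pie business still surfaces occasionally, which is exactly the kind of texture you want.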

And remember, "kid can't be trusted" might endear you to certain groups. You might have honorable thieves, sure, but it's entirely possible you could have dishonorable thieves too, who slap you on the back and buy you a drink if you tell the story of How I Got Two Coins Instead Of Just One (right before spiking your drink and stealing your wallet). You can code this into them without too much trouble – just set it up so that "this person is low on trustworthiness to me" is a good thing. I imagine you'd have several major axes of opinion – "trustworthiness", "helpfulness", "power" – and certain people could easily react differently to each axis. In fact there's probably a fascinating research paper in figuring out what major axes people actually judge each other on – in real life, the phrase "he's a nice guy, but he's just not very bright" shows up quite often, and that's just a simple example.
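Here's one way those axes could be wired up – a sketch where each group weights the axes differently, with all axis names, groups, and numbers invented:

```python
# Sketch: opinion as a weighted sum over axes, where each group supplies
# its own weights. Dishonorable thieves *like* low trustworthiness.

player_traits = {"trustworthiness": -2, "helpfulness": 3, "power": 0}

group_weights = {
    "villagers":            {"trustworthiness":  1.0, "helpfulness": 1.0, "power": 0.2},
    "dishonorable_thieves": {"trustworthiness": -1.0, "helpfulness": 0.1, "power": 0.5},
}

def opinion(group, traits=player_traits):
    weights = group_weights[group]
    return sum(weights[axis] * value for axis, value in traits.items())

print(opinion("villagers"))             # 1.0*-2 + 1.0*3 + 0.2*0 = 1.0
print(opinion("dishonorable_thieves"))  # -1.0*-2 + 0.1*3 + 0.5*0 ≈ 2.3
```

The same betrayal that cost you a couple of points with the villagers is a net positive with the thieves, with no global "evil" counter involved anywhere.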

Is that cool? I think it's cool. And it neatly avoids trying to divide every action into "good" and "evil" – your actions will be important, not based on the nature of the action, but based on how the characters around you regard that action.