Measurement

“Thank you for calling Amazon.com, may I help you?” Then — Click! You’re cut off. That’s annoying. You just waited 10 minutes to get through to a human and you mysteriously got disconnected right away.

Or is it mysterious? According to Mike Daisey, Amazon rated its customer service representatives on the number of calls taken per hour. The best way to get your performance rating up was to hang up on customers, thus increasing the number of calls you could take every hour.

An aberration, you say?

When Jeff Weitzen took over Gateway, he instituted a new policy to save money on customer service calls. “Reps who spent more than 13 minutes talking to a customer didn’t get their monthly bonuses,” writes Katrina Brooker (Business 2.0, April 2001). “As a result, workers began doing just about anything to get customers off the phone: pretending the line wasn’t working, hanging up, or often–at great expense–sending them new parts or computers. Not surprisingly, Gateway’s customer satisfaction rates, once the best in the industry, fell below average.”

It seems like any time you try to measure the performance of knowledge workers, things rapidly disintegrate, and you get what Robert D. Austin calls measurement dysfunction. His book Measuring and Managing Performance in Organizations is an excellent and thorough survey of the subject. Managers like to implement measurement systems, and they like to tie compensation to performance based on these measurement systems. But in the absence of 100% supervision, workers have an incentive to “work to the measurement,” concerning themselves solely with the measurement and not with the actual value or quality of their work.

Software organizations tend to reward programmers who (a) write lots of code and (b) fix lots of bugs. The best way to get ahead in an organization like this is to check in lots of buggy code and fix it all, rather than taking the extra time to get it right in the first place. When you try to fix this problem by penalizing programmers for creating bugs, you create a perverse incentive for them to hide their bugs or not tell the testers about new code they wrote in hopes that fewer bugs will be found. You can’t win.

Fortune 500 CEOs are usually compensated with a base salary plus stock options. The stock options are often worth tens or hundreds of millions of dollars, which makes the base pay almost inconsequential. As a result, CEOs do everything they can to inflate the price of the stock, even if it comes at the cost of bankrupting or ruining the company (as we're seeing again and again in the headlines this month). They'll do this even if the stock only goes up temporarily, and then sell at the peak. Compensation committees are slow to respond, but their latest brilliant idea is to require executives to hold their stock until they leave the company. Terrific. Now the incentive is to inflate the price of the stock temporarily and then quit. You can't win, again.

Don't take my word for it: read Austin's book and you'll understand why this measurement dysfunction is inevitable whenever you can't completely supervise workers (which is almost always).

I've long claimed that incentive pay isn't such a hot idea even when you can measure who's doing a good job and who isn't; Austin goes further, showing that you can't reliably measure performance in the first place, which makes incentive pay even less likely to work.



