ONE summer day 14 years ago, when I was a new cardiology fellow, my colleagues and I were discussing the case of an elderly man with worsening chest pain who had been transferred to our hospital for coronary bypass surgery. We studied the information in his file: On an angiogram, his coronary arteries looked like sausage links, sectioned off by tight blockages. He had diabetes, high blood pressure and poor kidney function, and in the past he had suffered a heart attack and a stroke. Could the surgeons safely operate?

In most cases, surgeons have to actually see a patient to determine whether the benefits of surgery outweigh the risks. But in this case, a senior surgeon, on the basis of the file alone, said the patient was too “high risk.” The reason he gave was that state agencies monitoring surgical outcomes would penalize him for a bad result. He was referring to surgical “report cards,” a quality-improvement program that began in New York State in the early 1990s and has since spread to many other states.

The purpose of these report cards was to improve cardiac surgery by tracking surgical outcomes, sharing the results with hospitals and the public, and, when necessary, placing surgeons or surgical programs on probation. The idea was that surgeons who did not measure up to their colleagues would be forced to improve.

But the report cards backfired. They often penalized surgeons, like the senior surgeon at my hospital, who were aggressive about treating very sick patients and thus incurred higher mortality rates. When the statistics were publicized, some talented surgeons with higher-than-expected mortality statistics lost their operating privileges, while others, whose risk aversion had earned them lower-than-predicted rates, used the report cards to promote their services in advertisements.