Patient satisfaction has become the latest catchphrase throughout hospital emergency departments. Many hospital administrators are under pressure from hospital boards to improve patient satisfaction scores, and CMS has indicated that satisfaction scores will affect reimbursement to hospitals. Given that patient satisfaction is poised to become an integral part of health care delivery in this country, we decided to look at some of the potential drawbacks of relying on patient satisfaction scores.

We chose to review the data collection and reporting methods of Press Ganey Associates, Inc. for this article. Press Ganey partners with roughly 40% of hospitals in the United States – including more than 10,000 health care facilities – to measure and improve quality of care. Part of Press Ganey’s business model includes sending surveys to patients who have visited a hospital, asking about their impressions of the facilities, the staff, and the physicians. This data is then analyzed and forwarded to participating hospitals. Hospitals use the Press Ganey data not only to judge the quality of care being provided in different hospital departments, but also to compare their hospital to other hospitals within the Press Ganey database. In some cases, hospitals even attempt to compare survey data for specific physicians. Even though the surveys are purported to improve the quality of patient care, there are several things you may not know about the survey calculations and their effects upon patient care.

The sample size may create unacceptable margins of error – but the survey results don’t tell you that

Press Ganey has stated that a minimum of 30 survey responses is necessary to draw meaningful conclusions from the data it receives and that it will not stand behind statistical analysis when fewer than 30 responses are received. Despite this statement, comparative data still gets published about hospital departments and about individual physicians when fewer than 30 responses are received. For example, Dr. Sullivan’s hospital receives approximately 8-10 Press Ganey survey responses per month. Even with this small sample size, Dr. Sullivan’s hospital still receives monthly reports from Press Ganey analyzing the data. During one month, Dr. Sullivan’s emergency department ranked in the first percentile within Press Ganey’s databases. Two months later, his emergency department ranked in the 99th percentile. How did they do it? Actually, any actions their group took probably made little difference in the subsequent survey data. By the time they were able to take action, some of the data had already been collected for the subsequent month, in which his group received accolades for its excellent satisfaction scores. Which percentile was representative of the emergency department’s performance? Probably neither. The small sample sizes simply produced unreliable data upon which the conclusions were based.
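The swing Dr. Sullivan observed is easy to reproduce with basic statistics. With roughly 10 responses per month, the margin of error on any satisfaction proportion is enormous, and scores bounce around even when nothing about the care changes. The following Python sketch is purely illustrative — the 75% “true” satisfaction rate and the monthly response count are assumptions, not Press Ganey data or methodology:

```python
import math
import random

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# With only 10 responses, a 50% satisfaction rate carries roughly a +/- 31 point
# margin of error; even 100 responses still leaves about +/- 10 points.
print(f"n=10:  +/- {margin_of_error(0.5, 10):.2f}")
print(f"n=100: +/- {margin_of_error(0.5, 100):.2f}")

# Simulate 12 months of scores for a hypothetical department whose underlying
# satisfaction rate never changes (75%), sampled 10 responses at a time.
random.seed(1)
monthly = [sum(random.random() < 0.75 for _ in range(10)) / 10 for _ in range(12)]
print("monthly satisfaction rates:", monthly)  # wide month-to-month swings, same department
```

Nothing about the simulated department changes from month to month, yet its monthly score varies widely — which is why percentile rankings built on 8-10 responses can lurch from the 1st to the 99th percentile without any real change in performance.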

The time you spend with critically ill patients may make another department’s satisfaction scores better . . . while making yours worse

Many studies have shown that the time a patient spends waiting for medical care is inversely proportional to that patient’s satisfaction with the visit. Suppose that a patient is brought by ambulance in respiratory distress. After nebulizer treatment and BiPAP fail, you have to intubate the patient. Then the patient’s blood pressure drops. You start inotropic medications, initiate antibiotics, and actively manage the ventilator settings. After an hour and a half of work, the patient is stabilized. You then spend another 30 minutes discussing the patient’s condition with family members, contacting consultants, and writing admission orders. How will the outstanding medical care that you provided affect your satisfaction scores? If anything, your satisfaction scores may drop due to all of the patients who graded you lower because they had an excessive wait while you were busy saving a life.

Patients admitted to the hospital and patients transferred to other hospitals do not receive Press Ganey emergency department satisfaction surveys. While some questions about the emergency department may be included on inpatient surveys, the answers to those questions count toward the inpatient satisfaction scores, not the emergency department satisfaction scores.

The pressure to improve emergency department satisfaction scores may create a significant dilemma for emergency department staff. An online survey of 717 respondents performed by Emergency Physicians Monthly on its medical blog “WhiteCoat’s Call Room” showed that more than 16% of medical professionals had their employment threatened by low patient satisfaction scores. In addition, 27% of respondents stated that their income was in some way tied to satisfaction scores.

When faced with a decision between improving satisfaction scores and unemployment, a clear — and potentially deadly — conflict of interest occurs. Should emergency physicians and nurses provide appropriate yet time-consuming medical care to high acuity patients or should they provide a minimal amount of medical care to the sickest patients so that they can focus more attention on patients who will be completing satisfaction surveys? Sometimes, especially in single-coverage emergency departments where staffing has been cut due to budget constraints, “doing both” may not be an option.

Patient satisfaction data is not random

Did you know that Hillary Clinton won the Democratic presidential nomination in 2008? Really, she did. A random sample of voters, all from Pennsylvania, showed that she was the clear winner. Failing to fully randomize data can invalidate even a large survey’s conclusions. As in the election example above, Press Ganey’s data are not random and are not representative of an emergency department’s patient population.

We already know that Press Ganey’s satisfaction surveys exclude admitted and transferred patients, which creates a significant bias toward low acuity patients. Emergency departments with a large percentage of admits may have lower satisfaction scores solely due to the decreased survey sample pool and to the increasing wait times encountered by low acuity patients while staff is trying to stabilize higher acuity patients.

Another source of non-randomization in Press Ganey’s patient satisfaction data is that patients who leave without being seen will not receive a satisfaction survey. In addition to decreasing the randomness of the sample, such a bias could create an incentive for staff to encourage unhappy patients in waiting rooms with non-urgent complaints to leave the emergency department without treatment.

Yet another bias against random samples in Press Ganey’s patient satisfaction surveys is that by default, patients can only receive a satisfaction survey every 90 days. While the intent of this limitation is evident – to keep “frequent flyers” from skewing data – the effect is to decrease the randomness of the data … and to further limit the data’s reliability.

Press Ganey has stated that “external validity requires that you only draw conclusions from the patient population that you are sampling.” However, the reports that Press Ganey generates draw conclusions from a sample of non-admitted patients who have not been treated in 90 days and who have actually been seen by a physician in the emergency department.

Instead of limiting its conclusions to this subset of patients, Press Ganey applies its satisfaction scores to the emergency department as a whole, a group much larger and more diverse than the patient population being sampled.

The lack of randomization in Press Ganey data samples was recently highlighted in a press release regarding emergency department wait times. Press Ganey reported that its 2009 data showed Utah emergency department patients had an average length of stay of 8 hours and 17 minutes, noting that the wait was the worst in the country and calling it “staggering.”

Utah ACEP then investigated the claims and discovered that Press Ganey had limited access to data from 65% of all the emergency department visits in Utah. When Utah ACEP reviewed data on 80% of emergency department patients from 2009, it found that the average length of stay in Utah was 3 hours and 29 minutes – far shorter than Press Ganey’s figure, and actually ranking Utah among the top 15 states for emergency department throughput.

“Response errors” may dramatically affect survey results

According to the book Asking Questions: The Definitive Guide to Questionnaire Design (Jossey-Bass, 2004), there are four basic factors related to response error: memory, knowledge, motivation, and communication. Each of these has a significant effect on patient satisfaction survey data.

For example, the time lag between a patient’s emergency department visit and the receipt of a survey in the mail may affect a patient’s memory of occurrences in the emergency department.

Patients who are asked to rate the medical skill and quality of physicians or nurses, who are asked to assess the skill with which phlebotomists take blood, or who judge whether medical personnel “took their problem seriously” often have little knowledge upon which to base their assertions.

Patients who are unhappy due to an excessive wait or because they did not receive requested medications may be motivated to show their unhappiness by grading all aspects of their care low, even when most aspects of the care they received were exceptional. Dr. Eric Armbrecht, a statistician and Assistant Professor at St. Louis University’s Center for Outcomes Research, echoes this concern, noting that many survey respondents will simply mark the same response throughout all the answers to a survey. He stated that, in general, those who respond to surveys are either very satisfied or very unsatisfied and want to make a point. These responses tend to cause a “bimodal distribution” with peaks at either end of the scale.
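The bimodal pattern Dr. Armbrecht describes can be sketched in a few lines of Python. The response weights below are invented for illustration — they are not drawn from any actual survey data — but they show why a mean score is a poor summary when responses pile up at the extremes:

```python
import random

random.seed(0)

# Hypothetical response distribution on a 1-5 scale: mostly the very unsatisfied
# (1s) and the very satisfied (5s), with few moderate answers in between.
responses = random.choices([1, 2, 3, 4, 5], weights=[30, 5, 5, 10, 50], k=1000)

counts = {score: responses.count(score) for score in sorted(set(responses))}
print(counts)  # two peaks: one at 1, one at 5, with a trough in the middle

mean = sum(responses) / len(responses)
print(f"mean = {mean:.2f}")  # the mean falls where almost nobody actually answered
```

The computed mean lands in the sparsely populated middle of the scale, describing almost no individual respondent — one reason averaged satisfaction scores can mislead when the underlying distribution is bimodal.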

When the problem of secondary motivation and response error was discussed with Press Ganey representatives, they acknowledged that they “heard about this frequently,” but that their surveys would not allow patients with readily apparent ulterior motives (such as those patients seeking narcotics prescriptions) to be excluded from data since it could lead to “cherry picking” patients and could impact the quality of the Press Ganey database.

While these sources of error are not unique to patient satisfaction surveys, it is important to recognize the impact that they may have upon the results of patient satisfaction data.

Catering to patient satisfaction scores increases health care costs

Another question in the Emergency Physicians Monthly survey asked respondents to rate, on a 1-10 scale, how patient satisfaction scoring affects the amount of testing they perform, with “1” representing a maximum decrease in testing and “10” representing a maximum increase. Forty-one percent of medical professionals reported decreasing the amount of testing they performed, while 59% reported increasing it. The average score was 6.3 – a mild overall increase in testing attributable to satisfaction data.

The increase in testing that survey results tend to cause may also create a conflict of interest for hospitals, which strive to improve patient satisfaction data but also stand to benefit financially from the additional testing performed in pursuit of better scores.

The threat of low survey scores frequently results in inappropriate medical care — and sometimes causes poor patient outcomes

In the Emergency Physicians Monthly survey, 48% of health care providers reported altering medical treatment due to the potential for a negative report on a patient satisfaction survey, and 10% of those who altered treatment stated that the changes were medically unnecessary 100% of the time. Examples of medically unnecessary treatment provided to improve satisfaction scores included performing unnecessary testing, prescribing medications that were not indicated, admitting patients who did not need hospital admission, and writing work excuses that were not warranted. More importantly, 14% of survey respondents stated that they were aware of adverse patient outcomes that resulted from treatment rendered solely due to concern over patient satisfaction surveys. These adverse outcomes included allergic reactions to unnecessary medications, resistant infections and Clostridium difficile colitis from unnecessary antibiotic prescriptions, kidney damage from contrast dye, and medication overdoses.

Hospital liability could increase from the effects of patient satisfaction scores

Pressuring medical providers to improve satisfaction scores to the point that they order medically unnecessary testing or admit patients inappropriately may become a source of liability for hospitals. If adverse patient outcomes due to unnecessary medical treatment can be tied to pressures that hospitals place on the medical staff to improve patient satisfaction scores, civil liability for the hospital could result. Knowledgeable lawyers could allege that hospitals or physicians cut corners with critically ill patients in order to focus attention on patients who will be receiving satisfaction surveys. In addition, as Medicare payments are scrutinized more closely, billing Medicare for treatments or hospitalizations provided solely because of pressure to improve patient satisfaction scores will likely receive increased attention from Medicare RAC auditors. A pattern of such overutilization, if substantiated, may be sufficient to warrant sanctions against a hospital. Health care providers who can prove that pressure to improve patient satisfaction scores unjustifiably increased costs to Medicare or Medicaid may choose to file “whistleblower” lawsuits in hopes of receiving up to 30% of the overpayments recovered from hospitals. Any perceived retaliation against providers who file these qui tam lawsuits subjects hospitals to even further liability under whistleblower statutes.

Conclusion

More than six in seven of the health care professionals responding to the Emergency Physicians Monthly survey believed that patients used the threat of negative satisfaction scores to obtain inappropriate care. While it is unlikely that 86% of patients are obtaining inappropriate medical care, the health care providers’ negative perceptions of how patients use satisfaction surveys show the significant damage that those surveys have done to the physician/patient relationship. Overemphasis on satisfaction data, especially when that data may be unreliable, will likely encourage inappropriate medical care, increase the costs of health care, demoralize health care professionals, and expose hospitals to greater liability in the future.

