The Best Way to Use Data to Determine Clinical Interventions

One of the most important aspects of managing clinical interventions for quality and cost improvement is to focus not only on defining the intervention, but on how to measure the intervention to determine if it is effective. In other words, what is the best way to measure clinical quality improvement? Fortunately, this is a templated process with specific requirements for tracking and analysis. But before launching into this topic, let’s review the basic steps involved in selecting a clinical intervention.

Before you can measure any results, you must first:

Implement a healthcare enterprise data warehouse (EDW).

Analyze the data to identify the greatest opportunities for quality and cost improvement.

Develop an Aim statement to focus your clinical intervention on a measurable, time-sensitive goal. The Aim statement includes an outcomes measure that shows what the team hopes to achieve through the intervention. Beneath the Aim statement are process measures that serve as incremental indications of progress toward the overall goal.

Identify the root cause of a quality issue by using data and lean improvement techniques like Value Stream Mapping.

Determine what action to take—what intervention to test—to improve the quality issue.

A previous post about managing clinical interventions cited the clinical intervention of reducing a hospital's heart failure readmission rate as an example. Because 25 percent of patients with heart failure are readmitted within 30 days, our fictional clinical improvement team chose an intervention aimed at reducing the hospital's current 30-day readmission rate.

Typically, the team would implement a process change to ensure that, before discharge from the hospital, patients have a follow-up appointment scheduled with their primary care provider, and receive the appropriate education about managing their condition and taking their medication.

The clinical improvement team will look at the current discharge process for heart failure patients, determine the changes required to improve outcomes, and then roll out the process change to staff.

Clinical Quality Improvement: Measuring the Effectiveness of a Clinical Process Change

Once the team has rolled out the process change, they have to use the EDW to measure whether that process change is driving improvement. This measurement process requires statistical analysis, which I’ll present as simply as I can in this post.

We will assume that the team has set “whether a follow-up appointment was scheduled prior to discharge” as one of the primary measures for this intervention. This is a straightforward, binary measure: either the appointment was scheduled, or it wasn’t. An analytics application running on your EDW platform will capture this measure and present it to you on a dashboard in a run chart—a graph that displays data in a time sequence and helps you visualize change over time.
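To make the run-chart idea concrete, here is a minimal Python sketch that aggregates this binary measure into weekly proportions, the kind of time series a run chart would plot. The records and week numbers are hypothetical, not drawn from any real EDW.

```python
from collections import defaultdict

# Hypothetical discharge records: (week number, follow-up appointment scheduled?).
# Illustrative data only.
discharges = [
    (1, True), (1, False), (1, True),
    (2, True), (2, True), (2, False), (2, True),
    (3, False), (3, True), (3, True),
]

# Roll the binary measure up into a per-week proportion.
totals = defaultdict(lambda: [0, 0])  # week -> [scheduled, total]
for week, scheduled in discharges:
    totals[week][0] += int(scheduled)
    totals[week][1] += 1

run_chart = {week: round(s / n, 2) for week, (s, n) in sorted(totals.items())}
print(run_chart)  # {1: 0.67, 2: 0.75, 3: 0.67}
```

Plotting these proportions against the week number gives the run chart described above.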

A run chart is a very important tool for measuring improvement, but it doesn’t give you all the information you need to assess the effectiveness of your process change. The next step toward maturing your measurement process is creating a statistical process control (SPC) chart. An SPC chart shows you if your intervention is changing the process in a significant way or whether changes in the data just represent random variation.

Measuring Clinical Interventions Using a Statistical Process Control Chart

Every process involves normal, random variation. Random variation is the sum of many small variations arising from real, yet insignificant, causes that are inherent in any complex system (and hospitals are definitely complex systems!).

Random variation is to be expected. But to measure change, you need a way to determine whether variation you see in the process you’re trying to improve is normal, random variation that can be ignored, or variation that can be attributed to a specific, assignable cause and that should be encouraged or eliminated.

An SPC chart allows you to:

Monitor process variation over time.

Differentiate between random variation and special-cause, assignable variation.

Identify and eliminate unwanted assignable variation.

Assess the effectiveness of the changes you have implemented to improve a process.

A sample SPC chart will help you understand how this works:

[Sample SPC chart: centerline with upper and lower control limits]

There are different kinds of SPC charts, but the principal elements of each are:

The vertical or Y axis, which represents the individual values you observe.

The horizontal or X axis, which represents time.

The centerline, which indicates your mean performance level. This mean is calculated from the baseline data. In our heart failure example, if the organization has had a process in place for scheduling follow-up appointments before discharge, the EDW will have baseline data to work with. If the process is brand new to the organization, the baseline will simply be zero.

An upper control limit, which is typically three standard deviations above the mean.

A lower control limit, typically three standard deviations below the mean.

By establishing upper and lower control limits, you're essentially saying, "We believe that this baseline process should yield a value within the control limits." Any data point that falls within this range can be interpreted as a normal result that the process is designed to yield. If you see data points falling above or below the control limits, you can postulate that they are due to special-cause, assignable variation, and you'll want to determine the root cause of that variation.
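As a sketch of how these limits might be computed, here is a simple individuals-style calculation over made-up baseline counts. Real SPC charts come in several types, each with its own limit formulas, so treat this as an illustration of the mean-plus-or-minus-three-sigma idea rather than a production implementation.

```python
import statistics

# Hypothetical baseline observations from the EDW, e.g. follow-up
# appointments scheduled per week. Values are illustrative.
baseline = [14, 16, 15, 13, 17, 15, 16, 14]

centerline = statistics.mean(baseline)   # mean performance level
sigma = statistics.pstdev(baseline)      # process standard deviation
ucl = centerline + 3 * sigma             # upper control limit
lcl = centerline - 3 * sigma             # lower control limit

def special_cause(value):
    """A point outside the control limits suggests assignable variation."""
    return value < lcl or value > ucl
```

With this baseline the centerline is 15, and a value of 25 would be flagged as special-cause variation while a value of 16 would not.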

Let’s say you implement a process change on a Monday. Your baseline data showed that your mean is 15, your upper control limit is 20, and your lower control limit is 10. Here’s what you might see during that week on your SPC chart:

Monday through Wednesday, you get values between 15 and 20. You can assume you’re seeing random variation and cannot attribute those results to your process change.

On Thursday, your value drops to 12. This too can be attributed to random variation and doesn’t indicate a drop in performance. Anything between 10 and 20 is an expected value for the process.

On Friday, you get a value of 25. Since this is above the upper control limit, you know this change doesn’t represent random variation. Something atypical has happened in the process—and it could very well represent change driven by your initiative.

As you keep tracking this over time, you can accurately measure whether your intervention is driving process improvement.
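The week described above can be checked programmatically. Here is a small sketch using the example's limits; Thursday's 12 and Friday's 25 come from the scenario in the text, while the Monday-through-Wednesday values are hypothetical fillers in the 15-to-20 range.

```python
# Limits from the worked example: mean 15, UCL 20, LCL 10.
MEAN, UCL, LCL = 15, 20, 10

# One week of observed values after the Monday process change.
week = {"Mon": 16, "Tue": 18, "Wed": 17, "Thu": 12, "Fri": 25}

# Points inside [LCL, UCL] are treated as random variation; points
# outside the limits suggest special-cause, assignable variation.
flagged = [day for day, v in week.items() if not (LCL <= v <= UCL)]
print(flagged)  # ['Fri']
```

Only Friday's value falls outside the control limits, matching the interpretation above: something atypical happened, and it may well be the intervention taking effect.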

Obviously, I’ve explained these statistical concepts here at their most basic, but I hope this explanation has given you a sense of the importance of tracking variation when measuring performance.

Measuring Clinical Interventions Requires Patience

Effectively evaluating whether your process change has had an impact requires patience! That is one reason why understanding variation is so important. A clinical improvement team is understandably anxious to see whether their intervention is driving improvement. For example, our heart failure team expects to see an increase in the number of heart failure patients with a scheduled follow-up appointment. But what happens if they see the number of follow-up appointments fall below the mean? Will they assume their intervention is flawed?

The team must be able to determine whether this drop is just an instance of normal variation. Otherwise, they may begin to tamper with their new process, and unnecessary tampering can worsen performance over time. Tweaking a process in response to single data points is never a good thing.

A second comment about patience: you must let some time pass before you can expect a change to occur. You need at least six to eight data points to get a good sense of a change. If you’re tracking a high-volume process that you can measure daily, such as the number of lab tests performed, you could have those data points within eight days.

But many, if not most, clinical processes are not high volume. Our fictional heart failure team will likely need to track heart failure discharges in terms of months, not days. This means it could be as many as six to eight months before the team has enough data to accurately determine whether the intervention has caused the process to change. The team needs to let that time pass without altering the intervention. Organizational patience is key to success.

The Work of Clinical Quality Measurement Is Never Done

The good news is that there will come a point in your measurement process where the data will show that the process is stabilized and that the process improvement has become the daily standard work. However, you should never stop measuring the process. In a previous article, I wrote about how to sustain clinical quality improvements in three critical steps. Continuous measurement is a key aspect of sustaining improvement. You can simply reduce the frequency of your measurements.