
Technical Notes

1. How do we decide when to color-code performance on a numeric indicator red or green?

We use an objective statistical test. We apply the red or green coloring only if the difference from the national average is big enough to be "statistically significant" - that is, not just random variation. We use standard statistical techniques to construct 99% confidence limits around our performance. If the national average falls within the confidence interval, we consider our results "near the national average." Otherwise, we color-code our performance better (green) or worse (red) than the national average.
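
For readers who want the mechanics, here is a minimal sketch of how such a test can work for a percentage indicator. The report does not name the specific statistical technique, so the exact (Clopper-Pearson) interval, the function name, and the counts below are illustrative assumptions, not the report's actual method:

```python
# A minimal sketch of the color-coding rule for a percentage indicator
# where lower is better. Assumes an exact (Clopper-Pearson) binomial
# interval; the report does not specify which standard technique it uses.
from scipy.stats import binomtest

def color_code(events, cases, national_rate, confidence=0.99):
    """Return 'green', 'red', or 'average' for a lower-is-better indicator."""
    ci = binomtest(events, cases).proportion_ci(
        confidence_level=confidence, method="exact")
    if ci.low <= national_rate <= ci.high:
        return "average"      # national average inside the interval
    hospital_rate = events / cases
    return "green" if hospital_rate < national_rate else "red"

# Hypothetical example: 20 complications in 500 cases vs. an 8% national rate
print(color_code(20, 500, 0.08))   # -> 'green'
```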

2. Why is Hospital A "average," and Hospital B "better than average," when Hospital B has a worse percentage than Hospital A? 

Hospital A didn't have as many cases as Hospital B.

Standard statistical techniques don't look only at how much a hospital's performance differs from the nation's. They also ensure that the difference isn't just random variation. Statistical techniques become more sensitive (have more "power") when they're based on more cases, so a hospital with more cases is more likely to be shown in red or green than a hospital with fewer cases.

Example: If the national complication rate for some indicator is 8%, a hospital with 50 cases and a 2% complication rate will be shown as average. Meanwhile, a hospital with 500 cases and a 4% complication rate will be shown as better than average, even though its rate is higher than the first hospital's. Standard statistical techniques compare each hospital to the national average - not to another hospital - and the question they ask is, "Is this difference more than the luck of the draw?"

While situations like the one described in the question seem odd at first, they make sense. It may help to consider an extreme example.

Imagine a hospital that had only one case, a case that did not have a complication. Even though the hospital's complication rate is 0%, you probably aren't persuaded that the hospital is better than the national average of 8%. What if the hospital had two cases and no complications? Is it truly better than the national average? You probably still think there are too few cases to make a judgment. The math behind the statistical comparison agrees with you, and it determines how many cases it takes to ensure the results aren't just random variability. The more cases behind a statistic, the more likely the statistic is to be colored better than (green) or worse than (red) the national average. The sketch below illustrates this with the examples above.
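
This sketch runs the examples from this answer through a two-sided exact binomial test at the 1% level, which corresponds to 99% confidence limits. The choice of test is an assumption for illustration; the report's actual method may differ:

```python
# Why sample size drives the color-coding: same or better rates, but only
# the large sample is statistically distinguishable from the 8% national
# rate. Assumes a two-sided exact binomial test at the 1% level.
from scipy.stats import binomtest

national_rate = 0.08   # national complication rate from the example

for label, events, cases in [
    ("1 case, 0 complications (0%)",      0,   1),
    ("2 cases, 0 complications (0%)",     0,   2),
    ("50 cases, 1 complication (2%)",     1,  50),
    ("500 cases, 20 complications (4%)", 20, 500),
]:
    p = binomtest(events, cases, national_rate).pvalue
    verdict = ("differs from national average" if p < 0.01
               else "near national average")
    print(f"{label}: p = {p:.4f} -> {verdict}")

# Only the 500-case hospital's 4% rate is significant; the 0% and 2%
# rates rest on too few cases to rule out the luck of the draw.
```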

3. How does risk adjustment work? 

Risk adjustment is a mathematical calculation that takes into account differences in patients and procedures. In this report, we use the analyses provided by the national organizations that supply the comparative data. 

There are different ways to risk-adjust data, and all methods of risk adjustment "level the playing field" by comparing the hospital's performance to the national average for patients with the same risk.  To understand what this means, consider the following example of rates of complication after surgery. 

All patients

              # of complications   # of patients   Hospital rate   National rate
Hospital A            66                600             11%             6%
Hospital B            33                540              6%             6%

Hospital A's rate is almost twice the national average. It seems that Hospital B is a much safer place to have surgery.  Now let's take risk into account.  Patients at high risk might be those who are over 75 years of age; patients at low risk would be everyone else. 

High risk patients

              # of complications   # of patients   Hospital rate   National rate
Hospital A            60                400             15%            16%
Hospital B             8                 40             20%            16%

Low risk patients

              # of complications   # of patients   Hospital rate   National rate
Hospital A             6                200              3%             4%
Hospital B            25                500              5%             4%

For both high and low risk patients, Hospital A's rates are actually lower than the national rates, and Hospital B's rates are higher. Many patients at Hospital A have complications simply because many of its patients are at high risk: the hospital's overall complication rate is high because it sees a larger proportion of high-risk patients than the average U.S. hospital does.

We can create one risk-adjusted number for Hospital A using a method called indirect risk adjustment. First, we predict the number of complications that would have occurred if this hospital's rate for each risk group matched the national average. To do this, we multiply the number of patients in each risk group by that group's national complication rate: predicted complications = 16% of 400 high risk patients plus 4% of 200 low risk patients = 64 + 8 = 72. The actual number of complications is 60 + 6 = 66, so the ratio of actual to expected is 66/72 = 0.917. Now we multiply the overall national rate of 6% by the ratio 0.917 that we just calculated, and we get 5.5%. This is Hospital A's risk-adjusted rate. The same calculation for Hospital B (expected = 16% of 40 plus 4% of 500 = 26.4; actual = 33; ratio = 33/26.4 = 1.25) gives a risk-adjusted rate of 6% x 1.25 = 7.5%. Now that we've adjusted the scores, it is easy to see that Hospital A is a safe hospital, with a complication rate slightly better than the national average.
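
The calculation above is mechanical enough to express in a few lines of code. This is a sketch that reproduces the worked numbers for Hospitals A and B; the function and variable names are illustrative, not from the report:

```python
# Indirect risk adjustment: scale the overall national rate by the
# hospital's observed-to-expected (O/E) complication ratio.
NATIONAL_OVERALL = 0.06                       # overall national rate
NATIONAL_BY_RISK = {"high": 0.16, "low": 0.04}  # national rates by risk group

def risk_adjusted_rate(actual, patients):
    """actual/patients: complication counts and case counts by risk group."""
    expected = sum(NATIONAL_BY_RISK[g] * patients[g] for g in patients)
    observed = sum(actual.values())
    return NATIONAL_OVERALL * observed / expected  # O/E ratio x national rate

hospital_a = risk_adjusted_rate({"high": 60, "low": 6},  {"high": 400, "low": 200})
hospital_b = risk_adjusted_rate({"high": 8,  "low": 25}, {"high": 40,  "low": 500})
print(f"Hospital A: {hospital_a:.1%}")   # 5.5%
print(f"Hospital B: {hospital_b:.1%}")   # 7.5%
```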

So, if we don't risk-adjust the hospital's rate, it appears to have a complication rate nearly twice the national average. If we do risk-adjust, we give the hospital credit for its tougher cases: its risk-adjusted complication rate is 5.5%, compared to the U.S. rate of 6% - a much more accurate representation of the hospital's performance.

4. Where did these indicators come from?

Saint Joseph Health is responding to lists of indicators and safe practices endorsed by national healthcare quality organizations. Click on an indicator to find the national organization endorsing the particular indicator. Table A gives the items on each national list and shows where each item is in the Saint Joseph Health Quality Report. We show our data for the entire list of indicators. This comprehensiveness is part of our assurance to the public that we give a complete picture of our quality.

Here are the organizations and the lists included in this report. Click the links for background information on the organizations, as well as detailed definitions and supporting research for the indicators and safe practices:

5. What are the data sources for these numbers?

The core measure data that are reported to The Joint Commission and CMS come from chart abstraction. A nurse who is very knowledgeable about patient care and about the set of indicators reviews the charts of all, or a sample of, the patients eligible for a measure, following a strict set of abstraction rules. This is also how we get the data about heart surgery and procedures that are submitted to STS and ACC. Some data require that a trained observer actually watch and record what happens during patient care; data on hand washing and on room cleaning are examples. Another source is billing information: in order for the hospital to bill for all the care that was provided, codes are assigned for each of the patient's diagnoses and for each of the procedures that were done. These coded data are the source for the CMS HACs and the AHRQ indicators. Finally, we conduct quarterly prevalence studies in some of the facilities to measure the presence of pressure ulcers and the use of restraints.

6. What are some of the known limitations of our report on these indicators and safe practices?

Perhaps the most important limitation is that the nationally endorsed lists cover so little of what prospective patients might want to know about a hospital's performance. Much more extensive information is needed to evaluate hospital care at the level of specific procedures and conditions - rather than trying to capture hospital-wide complication rates, for example. There are almost no indicators that address outpatient care or events that occur after the patient's hospital stay. The current lists of indicators are essentially silent about the patient's long-term survival and condition.

Current medical records codes do not capture important factors that should be used - but can't be used - to adjust the statistics. The data do not distinguish an emergency case from one where more time was available to react. The data do not indicate if the patient had "do not resuscitate" orders, which would indicate that the patient's death was expected and not a result of the care provided. Hospitals also differ in their documentation and coding practices.

Only a limited number of codes are used in risk-adjustment models, and we may not capture some important risk factors in our data. We are probably not risk-adjusting the PSIs and IQIs as thoroughly as we should. This limitation may be trivial for some of the indicators, but may lead to greater inaccuracy for high-risk patients and procedures.

The number of procedures performed is at best a proxy for other quality indicators. Some authorities suggest not using these volume-based indicators at all; others suggest using them only in conjunction with other indicators of quality of care.

We cannot be certain about the comparability of the U.S. and Kentucky averages. The U.S. average may be based on a biased sample of states or hospitals. For example, the average on a particular indicator may be too high, because it is based only upon hospitals proud or interested enough to submit their data to a national group. Presumably, the comparative average could also be too low, if - for example - states with high-risk or older populations are over-represented in the data.

Data from one-day prevalence studies and limited periods of observation are subject to time-of-year and low-volume variability and may not accurately represent what more complete data would show.

Although we follow national definitions, we still have countless judgment calls to make about how to display data, how to classify data for some indicators, etc. We hope that the Saint Joseph Health Quality Report helps contribute to the growing national and state interest in quantifying hospital quality performance and helps hasten the day when hospitals will have agreed-upon standard approaches to these decisions.
