Think of a hospital report card like any diagnostic test. It has an accuracy, a false positive rate, and a false negative rate (for an in-depth look at the accuracy of diagnostics, see the Users' Guides to the Medical Literature). Tests are also affected by the pre-test probability and by the likelihood of further 'tests' being done (verification bias). The classic example is the diagnosis of pulmonary embolus (PE). When a patient develops shortness of breath after surgery, there is a chance that it is a blood clot in the lung. Each test that is run has a certain likelihood of either diagnosing or ruling out the disease when it is truly present or absent, but few are completely accurate. For that reason, clinicians learn to treat PE based on probabilities rather than a definitive diagnosis. A subject dearer to my heart is the accuracy of CT scans in diagnosing deep neck infections, which have similar inaccuracies that affect treatment.
Hospital report cards act like diagnostic tests for the hospitals themselves: they test for inadequate performance. Take, for example, the reported rate of mortality from acute myocardial infarction (AMI) across hospitals. The reported mortality rate from AMI needs to be corrected for patient factors such as age, gender, cardiac severity, and comorbid status. Even if one could perfectly risk-adjust the data, there is still the chance of random error misclassifying a hospital. A hospital that has, in truth, an unacceptably high mortality rate from AMI may be classified as acceptable (a false negative). Conversely, a hospital with an acceptable mortality rate could be classified as substandard (a false positive).
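To see how random error alone can mislabel a hospital, here is a minimal simulation sketch. All the numbers (a 10% true mortality rate, 100 hospitals, 200 AMI cases each, a 13% flagging threshold) are hypothetical assumptions chosen for illustration: every hospital in this toy model truly has acceptable, identical performance, yet some will be flagged purely by chance.

```python
import random

random.seed(42)

TRUE_RATE = 0.10    # hypothetical true AMI mortality rate, identical everywhere
N_HOSPITALS = 100   # hypothetical number of hospitals
CASES = 200         # hypothetical AMI cases per hospital
THRESHOLD = 0.13    # observed rate above this gets flagged "unacceptable"

flagged = 0
for _ in range(N_HOSPITALS):
    # each case is an independent chance of death at the same true rate
    deaths = sum(random.random() < TRUE_RATE for _ in range(CASES))
    if deaths / CASES > THRESHOLD:
        flagged += 1  # a false positive: this hospital is truly acceptable

print(f"{flagged} of {N_HOSPITALS} acceptable hospitals flagged by chance alone")
```

With these numbers, a handful of hospitals typically cross the threshold in any given run even though none deserve it; smaller case volumes make the problem worse, since observed rates scatter more widely around the true rate.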
As has been pointed out by Dr. Wes, these report cards affect funding. They change where patients go for care, where public funds come from, and which hospitals are targeted for quality improvement. It stands to reason, then, that the impact on hospitals is going to affect how the report cards are created. In addition to the paper quoted by Wes, another recent study out of ICES (coincidentally authored by Peter Austin, a co-author of mine and consultant to my Master's Thesis) explored the relationship between outcome and report card design.
According to correspondence with Dr. Austin, "one can conceive of hospitals as either having acceptable performance or unacceptable performance. False positives (incorrectly classifying as having unacceptable performance a hospital that truly has acceptable performance) result in penalties being borne by the hospital (damage to reputation, decreased staff morale, loss of referrals and business) and potentially by the patient (decreased local access to services if services were to be moved to a regional centre). False negatives (incorrectly labeling as acceptable a hospital that truly has unacceptable performance) result in penalties (or costs) being borne by patients (unnecessary exposure to increased risk).
Hospital report cards can result in false positives and false negatives. How one weights the relative costs incurred due to false positives and false negatives can influence the threshold that one uses for classifying a hospital as having unacceptable performance."
Based on my reading, scorecards might be useful for improvement projects within a single hospital and for identifying outliers that fall below acceptable standards, assuming that you can clinically justify the standards, adjust for comorbidities, and account for the impact of misclassification. Even then, the research on how to normalize report cards is still in its infancy. I think they should be used with extreme caution by the public, and by public I mean anyone without a PhD in statistics. The use of hospital report cards by the community to identify subtle differences in care is difficult at best and misleading at worst.
*this post was amended on June 20th after correspondence with Dr. Austin.