Reliability of comorbidity scores derived from administrative data in the tertiary hospital intensive care setting: a cross-sectional study

Discussion

We undertook a retrospective cross-sectional review of patient records (chart review) and administrative coding data for comorbidities in 100 patients admitted to an adult general intensive care ward. We found that administrative data significantly under-reported comorbidities present in the patient records in the majority of cases. Our findings are, in general, consistent with several previous reports.17–25

In contrast to our overall findings, we found a small number of comorbidities that were reliably reported (κ≥0.4) in the administrative (coding) data. These were CHF, CKD, DMC, solid-organ cancer, and metastatic cancer.
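
As an illustration of the agreement statistic used here, the minimal Python sketch below computes Cohen's kappa for a single comorbidity from paired chart-review and coded indicators. The 0/1 values and the use of scikit-learn are illustrative assumptions, not part of the actual analysis pipeline.

```python
# Illustrative only: agreement between chart review and coded data for one
# comorbidity (e.g. CHF). The indicators below are invented for the example.
from sklearn.metrics import cohen_kappa_score

chart_review = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # 1 = comorbidity documented in the chart
coded_data   = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 1 = comorbidity present in the coded data

kappa = cohen_kappa_score(chart_review, coded_data)
print(f"kappa = {kappa:.2f}")   # a value >= 0.4 would meet the reliability threshold used here
```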

In 1999, Kieszak et al examined the CCI of carotid endarterectomy cases at a single health service.25 They compared coded data obtained from an administrative database with a medical chart review and concluded that the medical chart review was superior. A few years later, Quan et al conducted a similar study of all inpatients in a large health service and showed that, overall, coded data tended to under-report comorbidities.41 Youssef et al examined data for general medical inpatients in Saudi Arabia and drew a similar conclusion.29 More recently, this has been confirmed in a Norwegian general intensive care population by Stavem et al42 (table 4). In addition to the comorbidities that were more reliably reported in our study, Stavem et al found that cerebrovascular disease, dementia, and mild liver disease were also more reliably coded. As our institution has a similar casemix and size to theirs, such differences could be accounted for by differences in coding methodology. Nevertheless, from two studies in separate countries, it is clear that certain comorbidities are coded more reliably than others; this may guide which data should be included in risk-prediction models when comparing health services in different geographical locations.

Audited and coded inter-rater agreement

We selected adult admissions to an intensive care setting at a tertiary hospital with a high proportion of mechanically ventilated patients because we expected these patients to be more likely to have comorbidities, and we expected those comorbidities to influence the level of casemix funding and thus to be coded more reliably. It is not unexpected for chronic conditions that do not require intervention and do not affect funding to be excluded during the coding process. We did not ascertain the effect of coding on the funding of patient episodes, since this was not our primary aim; however, it has been suggested that the CCI is an inadequate predictor of resource utilisation.43

While there were no statistically significant differences in CCI and EI scores between the ventilated and non-ventilated patients, non-ventilated patients had a higher number of coded comorbidities, were more likely to have a longer stay in intensive care, and had a higher incidence of mortality during the same admission. This may be explained by the possibility that the non-ventilated group included a sizeable population in which ventilation was not deemed to be of therapeutic benefit, either because of a lack of a clear indication or because of a poor prognosis due to other comorbidities that might not have been captured by the CCI or EI. Overall, our observation of lower inter-rater agreement compared with other hospital settings25 29 41 is consistent with the hypothesis that coding reliability may be inversely proportional to the number of comorbidities.34

The Charlson methodology is used more commonly in risk-adjustment than the Elixhauser methodology,7 even though it was derived by chart review from a small and specific cancer population. The EI, which was derived from administrative data on a large population with a broad casemix, identifies a larger number of comorbidities. Our results suggest that under-reporting of the EI is comparable with that of the CCI, and that administrative data may not be reliable for generating either CCI or EI scores for intensive care patients.
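
For readers unfamiliar with how such an index is assembled, the sketch below shows a CCI-style score computed as a weighted sum of binary comorbidity flags, using the original 1987 Charlson weights for a small subset of conditions. It is a simplified illustration, not the full index or the coding rules used in this study.

```python
# Illustrative only: a Charlson-style score from binary comorbidity flags using
# the original 1987 weights for a subset of conditions. A full implementation
# would cover all Charlson conditions and apply hierarchy rules (e.g. metastatic
# cancer supersedes solid-organ cancer).
CHARLSON_WEIGHTS = {
    "chf": 1,                    # congestive heart failure
    "copd": 1,                   # chronic pulmonary disease
    "diabetes_complicated": 2,   # diabetes with end-organ damage
    "ckd": 2,                    # moderate/severe renal disease
    "solid_organ_cancer": 2,     # any non-metastatic malignancy
    "metastatic_cancer": 6,      # metastatic solid tumour
}

def cci_score(flags: dict) -> int:
    """Sum the weights of the comorbidities flagged as present (1/True)."""
    return sum(weight for name, weight in CHARLSON_WEIGHTS.items() if flags.get(name))

# Hypothetical patient with CHF, complicated diabetes and CKD -> score of 5
print(cci_score({"chf": 1, "diabetes_complicated": 1, "ckd": 1}))
```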

There are several practical implications of our findings. The use of administrative data to predict ICU mortality via the CCI and EI should be viewed with a great degree of caution. The Charlson comorbidities and the derived CCI score are commonly used for risk-adjustment in several mortality prediction models constructed from administrative data. In Victoria, these include the Health Round Table Reports,44 the National CHBOI mortality index,2 and the Dr Foster methodology.45 This is in contrast to models such as the Critical Care Outcome Prediction Equation (COPE) and the Hospital Outcome Prediction Equation (HOPE), which do not include comorbidities.46 47 Based on our study, such models are likely to under-estimate predicted morbidity and mortality in ICUs when they are built on administrative coding data.

If the CCI or EI is included in a mortality prediction model, such as a hospital standardised mortality ratio (HSMR) derived from administrative data, then several errors may result. First, a systematic bias due to under-reporting will be incorporated into the model: reliance on administrative data for the CCI may result in under-reporting of comorbidities and an incomplete assessment of patient risk.

Second, any variation in the reporting of comorbidities between institutions will lead to misleading comparative results. A health service that under-reports comorbidities will have lower CCI and EI scores, so its patients will appear to be healthier than they are. This reduces the expected mortality that forms the denominator of the HSMR and produces a higher-than-expected HSMR. Such an institution will misleadingly appear to be a poor performer.
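
A toy calculation makes this denominator effect concrete. The figures below are invented purely to show the direction of the bias, taking HSMR = 100 × observed deaths / expected deaths.

```python
# Illustrative only: how under-reported comorbidities inflate the HSMR.
# All figures are invented to show the direction of the effect.
observed_deaths = 18

expected_deaths_full_coding = 20   # risk model sees the true comorbidity burden
expected_deaths_under_coded = 15   # same patients, comorbidities under-reported

hsmr_full  = 100 * observed_deaths / expected_deaths_full_coding   # 90  -> looks acceptable
hsmr_under = 100 * observed_deaths / expected_deaths_under_coded   # 120 -> looks like a poor performer
print(hsmr_full, hsmr_under)
```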

Third, chart review of a random selection of patients may help a ‘poor performing’ health service to identify this as a potential source of bias in its report card. A better solution is for prediction models to identify and incorporate only those comorbidities that are reliably coded (CHF, CKD, COPD, cancer), rather than relying on less accurate index scores (such as the CCI and EI) that incorporate comorbidities that are unreliably reported. The optimal source from which not only the most accurate, but also the most efficient, CCI can be obtained also warrants further investigation.48 The increasing prevalence of electronic medical records (EMRs) provides the potential to capture large volumes of data more uniformly.36 With this, questions arise regarding which types of algorithms are more effective and whether medical language processing can be standardised across different practice settings and health services.43 Furthermore, the widespread use of EMRs for national safety and quality purposes requires standardisation of data management processes and compliance with regulatory requirements.49
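
One way such a model could be structured is sketched below: a logistic regression fitted on individual, reliably coded comorbidity flags rather than a composite index score. The column names, data source, and use of scikit-learn are illustrative assumptions, not the actual model or dataset from this study.

```python
# Illustrative only: a mortality model built from individual, reliably coded
# comorbidity flags instead of a composite CCI/EI score. The CSV extract and
# column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

RELIABLE_FLAGS = ["chf", "ckd", "copd", "cancer"]   # flags assumed to be reliably coded

admissions = pd.read_csv("admissions.csv")          # hypothetical administrative extract
X = admissions[RELIABLE_FLAGS]                      # 0/1 comorbidity indicators
y = admissions["died_in_hospital"]                  # 0/1 outcome

model = LogisticRegression()
model.fit(X, y)
expected_deaths = model.predict_proba(X)[:, 1].sum()   # e.g. the denominator of an HSMR
```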

Our study had a number of limitations. The sample size was relatively small and limited to a single site, reducing the precision of the estimates and the power to detect differences for some conditions. We conducted our study in the intensive care setting, and our results may not be generalisable to other patient groups, departmental settings, or hospital sites. We found evidence of systematic bias in the EI score that may reflect local coding rules. Our results should be viewed with caution and require validation in a larger cohort.
