Evaluation of a pragmatic approach to predicting COVID-19-positive hospital bed occupancy

Discussion

We implemented and evaluated two regression-based approaches to predicting COVID-19 bed occupancy across acute hospitals in NWL during the pandemic’s third (Delta variant) and fourth (Omicron variant) waves. Both models were based on the hypothesis of a stable linear relationship between unvaccinated cases in the community and occupied acute beds several days later. The simple model used total unvaccinated cases, and the multivariable model split these cases across five age bands. The multivariable model outperformed the simple model on MAPE, except during the rapid growth in cases associated with the start of the Omicron wave in London. During that time, neither model performed well. One factor contributing to this may be the different clinical characteristics of the Omicron variant when compared with the Delta variant of the virus.31 Another factor is that this model does not account for the protective effect of prior infection against subsequent infection and hospitalisation. This could result in the overestimation of both the number of unprotected individuals and the number of COVID-19-positive patients occupying hospital beds.
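As a concrete illustration of the two models described above, the following sketch fits both regressions without an intercept on synthetic data. The 9-day lag, the age-band mix, the coefficient values and all variable names are illustrative assumptions, not the implementation used in the study:

```python
import numpy as np

LAG = 9  # assumed days between community case report and bed occupancy

rng = np.random.default_rng(0)
cases = rng.poisson(400, size=120).astype(float)     # daily unprotected cases
mix = rng.dirichlet([25, 30, 20, 15, 10], size=120)  # assumed daily age-band mix
# Synthetic occupancy: a fixed fraction of cases LAG days earlier, plus noise.
beds = 0.05 * cases[:-LAG] + rng.normal(0, 2, size=120 - LAG)

def fit_no_intercept(X, y):
    """Least-squares fit of y = X @ beta with no intercept term."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Simple model: total unprotected cases -> occupancy LAG days later.
beta_simple = fit_no_intercept(cases[:-LAG, None], beds)

# Multivariable model: the same cases split across five age bands.
# The strongly correlated columns illustrate the multicollinearity noted above.
X_multi = (cases[:, None] * mix)[:-LAG]
beta_multi = fit_no_intercept(X_multi, beds)
```

Because the age-band columns all track total cases closely, the multivariable design is nearly collinear, which is one way to see why its coefficients can become unstable across time periods.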

The predictions made by the multivariable model for bed occupancy between 12 July and 18 October 2021 had an MAPE of 10.8%, representing a reasonable degree of accuracy for the intended use of managing bed capacity. However, even during the subsequent reasonably stable period, this error doubled. Furthermore, likely multicollinearity in this model may render the coefficients unstable and the model unreliable when applied to subsequent time periods. The lack of an intercept term in the model meant that variance inflation factors were not well defined, making this issue harder to address. This is likely a fundamental limitation of the age-band stratified approach. Predictions made by the simple model were worse on average, with MAPEs of 29.7% and 22.4% for the two relatively stable periods. The multivariable model in particular appeared to overpredict bed occupancy after actual occupancy levelled off following a period of increase. This effect was driven by increased numbers of unprotected cases in the community that did not translate into commensurate bed occupancy. This may be due to differences in case severity or virus variants at different stages of each wave and potentially to variations in hospital admission and discharge criteria at high occupancy levels. In principle, such differences could be accounted for by retraining the model to reflect the new system dynamics. However, in practice, this is problematic since, at the point of change, there are insufficient data to capture the new relationship between cases and occupancy.

Previous studies of bed occupancy prediction for COVID-19 use various approaches to model validation and evaluation, making direct comparisons of predictive accuracy difficult. For example, Baas et al report a mean absolute error of 1.96 beds for a 5-day prediction horizon at one site and 4.25 at another, but do not report a suitable denominator to convert these into an MAPE.14 Ryu et al achieved a 3.4% MAPE using a 12-hour prediction horizon.19 Bekker et al report a weighted MAPE of 8% with a 3-day horizon and 13% with a 7-day horizon,18 comparable with our multivariable model’s best performance.
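For reference, the MAPE and weighted MAPE figures compared here can be computed as follows (a minimal sketch; the function names are our own):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def wmape(actual, predicted):
    """Weighted MAPE: total absolute error over total actual occupancy,
    in percent, as reported by Bekker et al."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.sum(np.abs(actual - predicted)) / np.sum(actual)
```

Note that MAPE is undefined when actual occupancy is zero, and a mean absolute error in beds cannot be converted to either metric without the corresponding actual occupancy figures, which is the comparison difficulty noted above.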

The available data limit the modelling approach described in this study, and now that COVID-19 testing is no longer universally available free of charge in London, the relationship between reported cases in the community and hospital admissions is unlikely to be stable enough for this approach to work effectively. Even when testing is widely available, as it was at the height of the pandemic, the approach will be sensitive to changes in population behaviour. The model approximates unprotected cases using the overall population coverage of the vaccine for infected cases. However, the bulk of any inaccuracy caused by this approximation is absorbed into the model coefficients, and it is, therefore, unlikely to adversely impact the performance of the model. We used a fixed value from the literature for vaccine efficacy in this evaluation. In practice, this parameter will vary with the type of vaccine given (eg, mRNA (messenger ribonucleic acid) vaccines generally provided better longer-term protection against hospitalisation than viral vector vaccines), individual demographic and clinical factors and the dominant virus variant at the time. We did not account for immunity acquired through prior infection with COVID-19; in future work, this protective effect could be incorporated into the model in a similar way to that of vaccination. Nor did we account for local variations in prevalence within NWL or for bed occupancy at individual hospital sites. Although more geographical granularity might enable increased single-site accuracy, the smaller sample sizes would decrease accuracy, as London hospitals do not have fixed catchment areas, with patients exercising choice over where to present.
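One plausible form of the coverage-based approximation described above is sketched below; the formula and the parameter values are illustrative assumptions, not those used in the study:

```python
def unprotected_cases(total_cases, coverage, efficacy):
    """Approximate cases arising in effectively unprotected people.

    Illustrative assumption: a fraction coverage * efficacy of the
    population is vaccinated and protected, so reported cases are
    scaled by the remaining unprotected share.
    """
    return total_cases * (1.0 - coverage * efficacy)

# eg 1,000 daily reported cases, 80% coverage, assumed vaccine efficacy 0.9
estimate = unprotected_cases(1000, 0.80, 0.90)
```

Because this adjustment is a fixed scaling of total cases, any error in the coverage or efficacy values is absorbed into the fitted regression coefficient, which is why the approximation is unlikely to harm model performance, as noted above.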

Despite these limitations, the pragmatic modelling approach evaluated in this study appears to match the accuracy of more involved approaches during the periods of relative stability, although not during periods of more fundamental change. Another strength of the approach is the use only of routinely collected data. This approach may, therefore, be suited to rapid response modelling, where health systems need a ‘good enough’ prediction tool in place quickly and without specialised expertise, perhaps at the beginning of a new epidemic or wave. If used in this way, prediction outputs need to be closely monitored against true bed occupancy to detect when a period of poor predictive accuracy is entered. Rapid increases in MAPE can provide a timely indication of a fundamental change in the epidemiology of the pandemic. Additional parameters could also be agreed between analysts and decision makers, for example, setting a threshold for acceptable error rates. The model can be retrained as needed once a new pattern is established. Where expertise is available for more sophisticated modelling, for example, using modified SEIR-D models, there may be potential to adapt more readily to changes in the nature of the virus or the system response.
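The monitoring of predictions against true occupancy suggested above could be sketched as a rolling-window MAPE check against an agreed threshold (the window length and threshold here are illustrative assumptions to be set with decision makers):

```python
import numpy as np

def needs_retraining(actual, predicted, window=14, threshold_pct=15.0):
    """Flag when the rolling MAPE over the most recent `window` days of
    predictions exceeds an agreed error threshold, signalling a possible
    fundamental change in the case-to-occupancy relationship."""
    a = np.asarray(actual[-window:], dtype=float)
    p = np.asarray(predicted[-window:], dtype=float)
    rolling_mape = 100.0 * np.mean(np.abs((a - p) / a))
    return bool(rolling_mape > threshold_pct)
```

Run after each twice-weekly prediction cycle, such a check would give a timely, low-effort signal that the model has entered a period of poor accuracy and should be retrained once a new pattern is established.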

All model predictions presented in this study were shared on creation, twice a week, and discussed at weekly GOLD Command meetings. This aided understanding of future pressure on hospital bed occupancy and allowed the safe, coordinated opening and closing of additional COVID-19-prioritised areas to support clinical care across multiple departments.

Our evaluation method closely follows the use of the model in practice, with each model run drawing on the latest available data and repeated runs over an extended period. Many previous studies evaluate the application of the models in question for only one or two model runs, or use patient-level cross-validation of the data rather than evaluating against actual bed occupancy. Our approach yields a more realistic picture of the likely accuracy of the modelling approach.

Future research should investigate whether allowing the vaccine efficacy parameter to be selected during model fitting would improve this approach. There may be benefits in applying this simple modelling approach to other scenarios, such as predicting pressures on the acute hospital system during winter. Although population-wide testing data are unlikely to be available going forward, a similar approach might be adopted using primary care data to provide an early warning for increased emergency department attendance and hospital admissions. Standardised approaches to evaluating bed occupancy prediction models should also be explored through consensus building methods. The existing literature encompasses disparate methods and metrics, making meaningful comparisons of predictive accuracy difficult.
