Chaoping Wu, MD
Attending Physician
Cleveland Clinic Foundation
Cleveland, Ohio
Disclosure information not submitted.
Alex Milinovich
Director of research
Cleveland Clinic, United States
Disclosure information not submitted.
Rachael Shirley
QPSI Project Manager
Cleveland Clinic, United States
Disclosure information not submitted.
Eduardo Mireles-Cabodevila, MD
Director, Medical Intensive Care Unit
Cleveland Clinic - Respiratory Institute
Cleveland, Ohio
Disclosure information not submitted.
Abhijit Duggal, MD, MPH, MSc, FACP
Assistant Professor
Cleveland Clinic Foundation, United States
Disclosure information not submitted.
Hassan Khouli, MD
Critical Care Medicine Department Chair
Cleveland Clinic, United States
Disclosure information not submitted.
Anirban Bhattacharyya, MD, MPH
Senior Associate Consultant Critical Care and Transplant
Mayo Clinic, United States
Disclosure information not submitted.
Title: A Comparison Between Artificial Intelligence Algorithms and Clinicians in Predicting ICU Discharges
Introduction: Readmissions and length of stay (LOS) are key quality metrics in the ICU, and delays in transfer can reduce ICU capacity. We aimed to develop accurate discharge prediction models using artificial intelligence approaches and to compare their performance with that of clinicians.
Methods: In this retrospective study, a total of 12,761 patients and 17,703 unique ICU admissions to the Cleveland Clinic Medical ICU from 2015 to 2019 were included. Patients' demographic information, comorbidities, dynamic laboratory values, vital signs, medications, urine output, and ICU interventions were extracted from electronic medical records. Four models were built: logistic regression (LR), random forest (RF), gradient boosting machine (GBM), and neural network (NN). For each model, 80% of the dataset was used for training and 20% for validation. Model performance was evaluated with the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and F-score. The models were further validated on a separate cohort of 506 unique ICU admissions from April 2020. Additionally, a head-to-head comparison of the models with clinicians in predicting discharges was performed.
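The modeling workflow above can be sketched as follows. This is a minimal illustration, not the study's actual code: the features and labels are synthetic placeholders standing in for the extracted EMR variables, and scikit-learn's `GradientBoostingClassifier` stands in for the GBM framework the authors used.

```python
# Sketch of the described pipeline: 80/20 train/validation split, a
# gradient-boosted model, and AUC / sensitivity / specificity / F-score.
# All data here is synthetic; features are hypothetical stand-ins for
# LOS, vitals, labs, and ICU interventions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix, f1_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 6))
# Synthetic binary label: "eligible for discharge" driven by two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# 80% train / 20% validation, as in the abstract.
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_va)[:, 1]
pred = (proba >= 0.5).astype(int)

auc = roc_auc_score(y_va, proba)
tn, fp, fn, tp = confusion_matrix(y_va, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
f1 = f1_score(y_va, pred)
print(f"AUC={auc:.2f} sens={sensitivity:.2f} spec={specificity:.2f} F1={f1:.2f}")
```

The same split-and-score loop would be repeated for the LR, RF, and NN models, with the April 2020 cohort held out entirely as an external validation set.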
Results: Within 72 hours after discharge, 1,160 (6.6%) patients were readmitted or died. In separate models predicting adverse outcomes within 72 hours and eligibility for discharge, the GBM models were superior (AUC 0.76 and 0.87, respectively) to LR (AUC 0.68 and 0.62), RF (AUC 0.61 and 0.79), and NN (AUC 0.62 and 0.78). The most important features for discharge prediction were ICU interventions, LOS, inputs/outputs, and pulse oximetry. In the 2020 validation cohort, the GBM model and physicians agreed on patients' eligibility for discharge in 74% (1,423) of predictions. Among patients the model predicted to be eligible for discharge but clinicians did not, 91% were discharged within 24-48 hours. Among actual discharges, the model predicted eligibility earlier, by a median of 9 (interquartile range, 1-20) hours.
Conclusion: Machine learning models built on real-life data are consistent with physician predictions and can accurately predict patients' eligibility for discharge. Compared with physicians, the models predicted eligibility earlier and thus could potentially augment clinician decisions and decrease LOS.