Multi-modal Prediction of In-Hospital Mortality and In-Hospital Cardiac Arrest in the Intensive Care Unit (ICU)
Abstract/Objectives
Results/Contributions
We propose a multimodal machine-learning framework that combines data-level, feature-level, and decision-level fusion to predict in-hospital mortality and in-hospital cardiac arrest in intensive care unit (ICU) patients. Our study uses time-independent (static) data (such as age, gender, and race), time-dependent (dynamic) data (such as heart rate, respiratory rate, and blood oxygen saturation), and imaging data (chest X-ray images). We apply logistic regression (LR), random forest (RF), eXtreme Gradient Boosting (XGBoost), support vector machine (SVM), and k-nearest neighbors (KNN) models to the static data, a long short-term memory (LSTM) network to the dynamic data, and a convolutional neural network (CNN) to the chest X-ray images. We use the publicly available MIMIC-IV dataset for model training and internal validation, and the publicly available eICU dataset for external validation.

The experimental results show that combining different types of data improves prediction. The area under the receiver operating characteristic curve (AUROC) for predicting in-hospital mortality within one hour reached 0.94, with a sensitivity of 0.88; the AUROC for predicting in-hospital cardiac arrest within one hour reached 0.91, with a sensitivity of 0.88. Compared with results obtained using static data alone, the AUROC for mortality prediction increased by 0.09 and sensitivity by 0.12, while the AUROC for cardiac arrest prediction increased by 0.03 and sensitivity by 0.09. We therefore believe this multimodal framework can help clinicians more accurately identify patients at high risk of death or impending cardiac arrest, and can reduce healthcare costs.
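Of the three fusion strategies named above, decision-level (late) fusion is the simplest to illustrate: each modality-specific model produces its own risk probability, and the probabilities are combined into a single score. The sketch below is a minimal, hypothetical example of that idea; the function name, the equal-style weighting, and the 0.5 decision threshold are illustrative assumptions, not the tuned values or the exact scheme used in the study.

```python
# Minimal sketch of decision-level (late) fusion. Assumes three
# modality-specific models (e.g. XGBoost on static data, an LSTM on
# vital-sign time series, a CNN on chest X-rays) have each already
# produced a per-patient risk probability in [0, 1].

def late_fusion(p_static, p_dynamic, p_image, weights=(0.3, 0.4, 0.3)):
    """Fuse per-modality risk probabilities by weighted averaging.

    The weights are illustrative placeholders; in practice they would
    be chosen on a validation set.
    """
    w_s, w_d, w_i = weights
    if abs(w_s + w_d + w_i - 1.0) > 1e-9:
        raise ValueError("fusion weights must sum to 1")
    return w_s * p_static + w_d * p_dynamic + w_i * p_image

# Example: hypothetical model outputs for one ICU stay.
fused = late_fusion(p_static=0.20, p_dynamic=0.65, p_image=0.40)
high_risk = fused >= 0.5  # illustrative decision threshold
```

Feature-level fusion would instead concatenate the learned representations (e.g. the LSTM hidden state and CNN embedding) before a shared classifier, and data-level fusion would merge the raw inputs before any modeling; the late-fusion form above is shown only because it composes directly from per-model probabilities.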