Intelligible Models for HealthCare: Predicting the Probability of 6-Month Unfavorable Outcome in Patients with Ischemic Stroke
Neuroinformatics (IF 2.7), Pub Date: 2021-08-26, DOI: 10.1007/s12021-021-09535-6
Xiaobing Feng, Yingrong Hua, Jianjun Zou, Shuopeng Jia, Jiatong Ji, Yan Xing, Junshan Zhou, Jun Liao

Early prediction of unfavorable outcome after ischemic stroke is important for clinical management. Machine learning, as a novel computational modeling technique, could help clinicians address this challenge. We aim to investigate the applicability of machine learning models for individualized prediction in ischemic stroke patients and to demonstrate the utility of various model-agnostic explanation techniques for machine learning predictions. A total of 499 consecutive patients with unfavorable [modified Rankin Scale (mRS) score 3–6, n = 140] or favorable (mRS score 0–2, n = 359) outcome at 6 months after ischemic stroke were enrolled in this study. Four machine learning models, Random Forest (RF), eXtreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Support Vector Machine (SVM), achieved areas under the curve (AUC) of (90.20 ± 0.22)%, (86.91 ± 1.05)%, (86.49 ± 2.35)%, and (81.89 ± 2.40)%, respectively. Three global interpretability techniques (Feature Importance, which shows the contribution of selected features; the Partial Dependence Plot, which visualizes the average effect of a feature on the predicted probability of unfavorable outcome; and Feature Interaction, which detects the change in prediction produced by varying features jointly, beyond their individual effects) and one local interpretability technique (the Shapley value, which explains the predicted probability of unfavorable outcome for individual instances) were applied and presented via visualization. The current study is therefore important for better understanding intelligible healthcare analytics through explanations of predictions at both the local and global level, and may help reduce the mortality of patients with ischemic stroke by assisting clinicians in the decision-making process.




Updated: 2021-08-27