Ethical considerations about artificial intelligence for prognostication in intensive care
Intensive Care Medicine Experimental Pub Date : 2019-12-01 , DOI: 10.1186/s40635-019-0286-6
Michael Beil 1 , Ingo Proft 1, 2 , Daniel van Heerden 3 , Sigal Sviri 4 , Peter Vernon van Heerden 4

Background: Prognosticating the course of diseases to inform decision-making is a key component of intensive care medicine. For several applications in medicine, new methods from the field of artificial intelligence (AI) and machine learning have already outperformed conventional prediction models. Due to their technical characteristics, these methods will present new ethical challenges to the intensivist.

Results: In addition to the standards of data stewardship in medicine, the selection of datasets and algorithms to create AI prognostication models must involve extensive scrutiny to avoid biases and, consequently, injustice against individuals or groups of patients. Assessment of these models for compliance with the ethical principles of beneficence and non-maleficence should also include quantification of predictive uncertainty. Respect for patients' autonomy during decision-making requires transparency of the data processing by AI models to explain the predictions derived from these models. Moreover, a system of continuous oversight can help to maintain public trust in this technology. Based on these considerations as well as recent guidelines, we propose a pathway to an ethical implementation of AI-based prognostication. It includes a checklist for new AI models that deals with medical and technical topics as well as patient- and system-centered issues.

Conclusion: AI models for prognostication will become valuable tools in intensive care. However, they require technical refinement and a careful implementation according to the standards of medical ethics.
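The abstract calls for quantification of predictive uncertainty when assessing AI prognostication models against the principles of beneficence and non-maleficence. As a purely illustrative sketch, not a method described in the paper, the following Python snippet shows one common way to obtain such an estimate: a bootstrap ensemble of classifiers trained on a synthetic cohort with hypothetical features, reporting an interval around the predicted risk rather than a single point estimate.

# Illustrative sketch only: the paper does not prescribe a specific technique.
# A bootstrap ensemble is one common way to quantify predictive uncertainty,
# here applied to a hypothetical ICU outcome classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Synthetic cohort: two made-up features (e.g. a severity score and age),
# with a noisy linear relationship to a binary outcome.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

# Fit an ensemble of models on bootstrap resamples of the cohort.
ensemble = []
for i in range(50):
    Xb, yb = resample(X, y, random_state=i)
    ensemble.append(LogisticRegression(max_iter=1000).fit(Xb, yb))

# For a new patient, report the spread of predictions across the ensemble,
# not just a single point estimate of risk.
x_new = np.array([[1.0, -0.5]])
preds = np.array([m.predict_proba(x_new)[0, 1] for m in ensemble])
print(f"predicted risk: {preds.mean():.2f} "
      f"(95% interval {np.percentile(preds, 2.5):.2f}-{np.percentile(preds, 97.5):.2f})")

A wide interval for an individual patient would signal that the model's prediction should carry less weight in decision-making, which is the kind of transparency the authors argue is needed before such tools inform care at the bedside.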
