Patients should be informed when AI systems are used in clinical trials
Nature Medicine ( IF 82.9 ) Pub Date : 2023-05-23 , DOI: 10.1038/s41591-023-02367-8
Subha Perni 1,2,3, Lisa Soleymani Lehmann 2,4, Danielle S. Bitterman 1,2

Artificial intelligence (AI) systems are increasingly being investigated in clinical trials. Trials that use AI must be held to the same ethical standards for risk assessment and disclosure as all human participant studies. All clinical AI systems, especially those under active investigation, carry new risks, including human–machine interactions, interpretability and data limitations. The full magnitude and scope of these risks are not yet known, because clinical AI integration is still in its infancy. We argue that, in light of these risks and the uncertainty therein, disclosure is a minimal standard when patients’ data are being used in an AI clinical trial that may affect clinical decisions, even if written informed consent is not required.




Updated: 2023-05-23