Understanding, explaining, and utilizing medical artificial intelligence
Nature Human Behaviour (IF 21.4) | Pub Date: 2021-06-28 | DOI: 10.1038/s41562-021-01146-0
Romain Cadario, Chiara Longoni, Carey K. Morewedge

Medical artificial intelligence is cost-effective, scalable, and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven both by the subjective difficulty of understanding algorithms (the perception that they are a ‘black box’) and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1–3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than by algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study, using Google Ads for an algorithmic skin cancer detection app, finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013).



