Enhancing trust in AI through industry self-governance
Journal of the American Medical Informatics Association (IF 6.4). Pub Date: 2021-04-25. DOI: 10.1093/jamia/ocab065
Joachim Roski, Ezekiel J Maier, Kevin Vigilante, Elizabeth A Kane, Michael E Matheny

Artificial intelligence (AI) is critical to harnessing value from exponentially growing health and healthcare data. Expectations are high for AI solutions to effectively address current health challenges. However, prior periods of enthusiasm for AI have been followed by periods of disillusionment, reduced investment, and slowed progress, known as "AI Winters." We are now at risk of another AI Winter in health/healthcare due to increasing publicity around AI solutions that do not represent touted breakthroughs, thereby decreasing users' trust in AI. In this article, we first highlight recently published literature on AI risks and mitigation strategies relevant to groups considering designing, implementing, and promoting self-governance. We then describe a process by which a diverse group of stakeholders could develop and define standards for promoting trust, as well as AI risk-mitigating practices, through greater industry self-governance. We also describe how adherence to such standards could be verified, specifically through certification/accreditation. Governments could encourage self-governance to complement existing regulatory schema or legislative efforts to mitigate AI risks. Greater adoption of industry self-governance could fill a critical gap and construct a more comprehensive approach to the governance of AI solutions than US legislation/regulations currently encompass. In this more comprehensive approach, AI developers, AI users, and government/legislators all have critical roles to play in advancing practices that maintain trust in AI and prevent another AI Winter.

Updated: 2021-04-25