Mind the gap! On the future of AI research
Palgrave Communications, Pub Date: 2021-03-15, DOI: 10.1057/s41599-021-00750-9
Emma Dahlin

Research on AI tends to analytically separate technical and social issues, viewing AI first as a technical object that only later, after it has been implemented, may have social consequences. This commentary paper discusses how some of the challenges of AI research relate to the gap between technological and social analyses, and it proposes steps ahead for how to practically achieve prosperous collaborations for future AI research. The discussion draws upon three examples to illustrate the analytical gap in different phases of the development of AI systems. Attending to the planning phase, the first example highlights the risk of oversimplifying the task for an AI system by not incorporating a social analysis at the outset of the development. The second example illuminates the issue of system acceptance, where the paper elaborates on why acceptance is multifaceted and need not be approached as merely a technical problem. With the third example, the paper notes that AI systems may change a practice, suggesting that a continuous analysis of such changes is necessary for projects to maintain relevance as well as to consider the broader impact of the developed technology. The paper argues that systematic and substantial social analyses should be integral to AI development. Exploring the connections between an AI’s technical design and its social implications is key to ensuring feasible and sustainable AI systems that benefit society. The paper calls for further multi-disciplinary research initiatives that explore new ways to close the analytical gap between technical and social approaches to AI.




Updated: 2021-03-15