Multilayered review of safety approaches for machine learning-based systems in the days of AI
Journal of Systems and Software (IF 3.5), Pub Date: 2021-03-06, DOI: 10.1016/j.jss.2021.110941
Sangeeta Dey, Seok-Won Lee

The unprecedented advancement of artificial intelligence (AI) in recent years has altered our perspectives on software engineering and systems engineering as a whole. Nowadays, software-intensive intelligent systems rely more on a learning model than on thousands of lines of code. This shift has led to new research challenges in the engineering processes needed to ensure the safe and beneficial behavior of AI systems. This paper presents a literature survey of the significant efforts made in the last fifteen years to foster safety in complex intelligent systems. The survey covers relevant aspects of AI safety research, including safety requirements engineering, safety-driven design at both the system and machine learning (ML) component levels, and validation and verification from the perspective of software and systems engineers. We categorize these research efforts based on a three-layered conceptual framework for developing and maintaining AI systems. We also perform a gap analysis to highlight the open research challenges in ensuring safe AI. Finally, we conclude the paper by providing future research directions and a road map for AI safety.




Updated: 2021-03-16