Adversarial attack and defense in reinforcement learning-from AI security view
Cybersecurity Pub Date : 2019-03-29 , DOI: 10.1186/s42400-019-0027-x
Tong Chen , Jiqiang Liu , Yingxiao Xiang , Wenjia Niu , Endong Tong , Zhen Han

Reinforcement learning is a core technology for modern artificial intelligence and has become a workhorse for AI applications ranging from Atari games to Connected and Automated Vehicle (CAV) systems. A reliable RL system is therefore the foundation of security-critical AI applications, a concern that is now more pressing than ever. However, recent studies have discovered that adversarial attacks are also effective against neural network policies in the context of reinforcement learning, which has inspired innovative research in this direction. Hence, in this paper, we make the first attempt to conduct a comprehensive survey of adversarial attacks on reinforcement learning from an AI security perspective. Moreover, we briefly introduce the most representative defense technologies against existing adversarial attacks.
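To make the attack setting concrete, below is a minimal, illustrative sketch (not taken from the paper) of an FGSM-style perturbation applied to the observation fed into a neural network policy, in the spirit of the attacks the survey covers. The small policy network, the `fgsm_attack` helper, and the epsilon value are hypothetical and exist only for illustration.

```python
# Sketch: FGSM-style adversarial perturbation of an RL policy's observation.
# All names here (Policy, fgsm_attack) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    """A toy policy network over flat observations, outputting action logits."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def fgsm_attack(policy: Policy, obs: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
    """Perturb `obs` so the action the clean policy prefers becomes less likely."""
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    target = logits.argmax(dim=-1)             # action the clean policy would take
    loss = F.cross_entropy(logits, target)     # maximize this loss w.r.t. the observation
    loss.backward()
    adv_obs = obs + epsilon * obs.grad.sign()  # single FGSM step in observation space
    return adv_obs.detach()

if __name__ == "__main__":
    policy = Policy(obs_dim=8, n_actions=4)
    clean = torch.randn(1, 8)
    adv = fgsm_attack(policy, clean)
    print(policy(clean).argmax().item(), policy(adv).argmax().item())
```

Even a small, bounded perturbation of this kind can flip the selected action; repeated over an episode, such perturbations can sharply degrade the agent's return, which is the core threat model motivating the survey.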
